Testing your Lua scripts can sometimes be a bit tedious. It usually involves injecting a message in order to trigger the callout that will execute the code and output a message to the log. You must then open the log in order to check the log entry.

There is a better way. The Momentum console is extensible so you can add a command used solely for testing code. This does away with the need for injecting a message and looking for log entries in paniclog.ec.

Console Command

require("msys.core");
require("xml");

local function test_code(cc)
  -- put the code you wish to test here
  local doc = xml.parsexml([[<doc></doc>]]);
  local node = doc:root();
  local child = node:addchild("item");
  child:attr("name", "Junior");
  child:contents("I am a child node.");

  -- use print for console output
  print(node:tostring());
end

msys.registerControl("test_code", test_code);

Comments

  • This code uses the XML library, so the xml module must be required.
  • Choose whatever name you wish for your function. The parameter passed to a control function is control construct userdata. We need not be concerned with it here, but if you want to pass an argument to the console command, access it as cc.argv[1]. The code that you want to test goes inside this function.
  • The print statement outputs the node as text, verifying that the XML object has been created. You do not need to send a test email or check for log entries in paniclog.ec.
  • You must use the msys.registerControl function to register your console command. You can register any number of commands from the same script file, so you can keep adding functions as needed.

Test your code by issuing the command /opt/msys/ecelerity/bin/ec_console /tmp/2025 test_code. Invoking the console in this way, in batch mode, executes the test_code command and immediately exits the console. You should see output such as the following:

<doc>
<item name="Junior">I am a child node.</item>
</doc>

Errors will also be output to the screen. For example, if you attempt to pass nil to the child:contents function you will see the following error message:

…/msys/ecelerity/etc/conf/global/lua_scripts/ec_console.lua:21: bad argument #1 to 'contents' (string expected, got nil)

The console provides a very convenient way of testing code but it has limitations. You have no access to userdata such as an ec_message so you cannot test message object methods. Additionally, some Lua functions can only be used during specific callouts and require that a message transit Momentum.


First, a disclaimer. This post provides a general guide to backing up a Postgres database table. The examples provided do not refer to any specific table or database. If you are backing up a specific table, you should also back up any tables that reference it; because Postgres is a relational database, there will always be references between tables.

This is a 'how to' document that only gives example commands; how the work is actually done depends on the person doing it and the specific use case. Always maintain the integrity of the data you are backing up. Backing up the entire database is always a safe option, but when backing up specific tables you need to be careful.

SQL-dump/pg_dump:

The idea behind the SQL-dump method is to generate a text file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was in at the time of the dump. PostgreSQL provides the utility program pg_dump for this purpose.

pg_dump is an effective and comprehensive tool for taking Postgres database backups and using them to restore a database. It is not restricted to whole databases, however: we can use pg_dump to back up individual tables and then restore those tables individually as well. Using pg_dump you can also back up a local database and restore it on a remote server.

How to Back Up a Postgres Database:

  1. Back up a Postgres table:

    $ /opt/msys/3rdParty/bin/pg_dump --table maincontrol.orgs -U ecuser pe -f ms_table.sql

    The above command is an example of how to back up a specific table from a Postgres database. Here we are backing up the table 'orgs' in schema 'maincontrol' from the database 'pe' to the file ms_table.sql. To back up a specific table, use the --table TABLENAME option in the pg_dump command. If the same table name exists in different schemas, use the --schema SCHEMANAME option as well.

  2. Back up a specific Postgres database:

    $ /opt/msys/3rdParty/bin/pg_dump -U ecuser pe -f pe_dump.sql

    This is an example of backing up a specific Postgres database. Here we are backing up the Message Central database 'pe' to the file pe_dump.sql. The backup file contains CREATE TABLE, ALTER TABLE, and COPY commands for all the tables in the 'pe' database.

  3. Back up all Postgres databases:

    $ /opt/msys/3rdParty/bin/pg_dumpall -U ecuser > all_dump.sql

    You can back up all the databases using the pg_dumpall command. The above command will create a dump of all the databases that reside on the Postgres instance running on a particular server. To list all the databases that have been backed up, use the command grep "^[\]connect" all_dump.sql.
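The grep in step 3 can be tried without a live server; here is a minimal sketch using a mocked fragment of pg_dumpall output instead of a real dump (the file contents are illustrative only):

```shell
# Mock a tiny fragment of pg_dumpall output (a real dump comes from
# the pg_dumpall command shown above)
cat > all_dump.sql <<'EOF'
--
-- PostgreSQL database cluster dump
--
\connect pe
CREATE TABLE maincontrol.orgs (id integer);
\connect template1
EOF

# List every database contained in the dump
grep "^[\]connect" all_dump.sql
```

The grep prints one \connect line per database present in the dump, which is a quick way to confirm a pg_dumpall file covers everything you expect.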

How to Restore a Postgres Database

  1. Restore a Postgres table:

    $ /opt/msys/3rdParty/bin/psql -U ecuser -f ms_table.sql pe

    The above command will install the table that was backed up in the ms_table.sql file into the 'pe' database. Make sure that the table does not already exist, or you will see a series of 'already exists' errors. This command creates the table and loads all the data into the newly created table.

  2. Restore a Postgres database:

    $ /opt/msys/3rdParty/bin/psql -U ecuser -d pe -f pe_dump.sql

    Similar to restoring a table, we can use the above command to restore the complete database. Here we are restoring the 'pe' database using the file pe_dump.sql, which we created in the backup section above.

  3. Restore all databases:

    $ /opt/msys/3rdParty/bin/psql -U ecuser -f all_dump.sql

    Restore all the databases using the above command. all_dump.sql is the file that was created using pg_dumpall. The above command will give us all the Postgres databases in the exact state they were in when the dump was taken on the original database server.
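As noted in step 1, restoring into a database where objects already exist produces 'already exists' errors. One way to review a large restore is to capture psql's stderr to a log file and scan it afterward; a sketch with a mocked log (the file names and contents are illustrative, not real psql output captured here):

```shell
# Mock a psql restore log (in practice, something like:
#   psql -U ecuser -f ms_table.sql pe 2> restore.log)
cat > restore.log <<'EOF'
ERROR:  relation "orgs" already exists
COPY 42
ERROR:  relation "users" already exists
EOF

# Count objects that were skipped because they already existed
grep -c 'already exists' restore.log
```

A nonzero count tells you some objects were not recreated, so their data may not have been reloaded.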


Four years ago, Message Systems had a single product offering in Momentum version 2. While this was a groundbreaking MTA, it was really built for the singular function of processing email very, very quickly. Momentum 2 stored a number of logs but had no local database and did not really need one.

When Momentum version 3 was introduced, it came with a new dashboard that used an embedded PostgreSQL database. Shortly after this release, the product offering expanded to include content creation and campaign tools that also made use of the PostgreSQL database. Intelligent adaptive message shaping was also added utilizing a Riak database. Most recently, introspection and tracing tools were added that make use of a Lucene database.

So why use all these databases? Each of these types of data store is particularly good at certain functions, so we use different databases to make the overall platform solution as efficient as possible. PostgreSQL is a powerful SQL relational database with the ability to store complex tables as well as procedures for advanced processing. It handles the bulk of the data workload in the Message Systems platform.

Riak is a NoSQL database designed to work in a distributed environment. It is essentially a key-value data store, similar in concept to Hadoop, that is extremely fast and durable. It replicates natively across a cluster to provide fast access to data from any node. We use it primarily for the storage and retrieval of message shaping settings for the Adaptive Delivery® component.

Lucene is an Apache project based in Java that delivers extremely high performance text search capabilities. In our case, we record message transaction logs to a Lucene data store so that we can search them at a later date. When necessary, a customer service representative can execute a search against this Lucene data store for any message using a wide variety of factors.

So you can see that each of these data store technologies is quite different and serves a very different purpose. Message Systems includes each because they are the best at what they do, and together they provide the most efficient platform overall.


Starting with Momentum Version 3.0, Message Systems has included an embedded copy of the Lua language in the bundle as a replacement for Sieve. For years Message Systems customers used Sieve as a policy engine to script actions based on message details, but Sieve, even in its modified Sieve++ form, is limited. Lua is a big improvement, offering the ability to iterate over data and work with tables, among other features. Just like Sieve, we have modified our embedded version of Lua to extend its functionality in numerous ways.

Not surprisingly, Lua has a large following in the programming world. It's a well-developed, multi-purpose language that is often used by game developers for its simplicity and speed, and for its inherent ability to expose and use C modules alongside its own native scripting. In Momentum, Lua has been extended with a number of local functions specifically to help with messaging rules. Aside from the extensive documentation on our custom functions, there is also a whole community of Lua users and developers on the web (links at the end of the article).

You will see in the sample below that the code itself is quite simple to follow, and anyone who has written a policy script in Sieve will see this as manna from heaven. So what does Lua look like in Momentum? I can show you using a very basic policy script that will assign messages to a binding based on an X-Header value. Here is the heavily commented policy script:

-- This policy script is a common but very simple example
--[[
You can use comments in Lua with "--" for a single line; multi-line
comments can be framed as this one is.
]]

-- As with many languages, you can "require" helper modules.
-- We have developed dozens of helpers that make message management easier.

require("msys.core");
require("msys.extended.message");

-- We define the following variable as local and make it a table using braces,
-- i.e. the variable "mod" is a local table.

local mod = {};

-- We have predefined function names for each message phase.
-- This one is active specifically in the set_binding phase.

function mod:validate_set_binding(msg)

  -- Getting the value of the X-Binding header is this easy:
  local mybinding = msg:header("X-Binding");

  -- Assigning a binding of the same name as the header is just as easy:
  local err = msg:binding(mybinding);

  -- Proper etiquette requires us to "end" the function.
end;

-- And finally we register the script so Momentum can use it.
msys.registerModule("policy", mod);


Enabling the script to run with Momentum just requires adding a reference to the ecelerity.conf file:

scriptlet scriptlet {
   script policy {
      source = "/opt/msys/ecelerity/etc/conf/default/lua/policy.lua"
   }
}

That is all there is to it. This is a very basic example, but it is probably the most common use of scripting in Momentum. Our clients have built some extremely complex functionality with Lua, including auto-responders, message cadence (recipient fatigue) protection and automated database list hygiene systems. If you do a little research you will find that Lua is embedded in or core to many familiar pieces of software, including Angry Birds, Civilization V, World of Warcraft, Far Cry, AutumnOut, the Sputnik wiki engine, and the Cisco Adaptive Security Appliance. We are not alone in thinking Lua is awesome.

If you’re interested in gaining a better understanding of the Lua language, I recommend the book Programming in Lua, Second Edition by Roberto Ierusalimschy — A.K.A. the Lua Bible. There’s also a ton of links on the Internet, and I’ve listed a few of them below. By the way, Lua is a proper name, and is not an acronym. Portuguese for “moon,” Lua was originally developed by a team at the Pontifical Catholic University of Rio de Janeiro in Brazil.

Tech Tips: Momentum Performance Tuning Tips

by Oleksiy Kovyrin, Senior Technical Operations Engineer, LivingSocial

We’re pleased to present Oleksiy Kovyrin as our first guest columnist for Tech Tips. This article originally appeared in Oleksiy’s blog and he shared it in the Message Systems LinkedIn group, which is how we first learned about it. If you’re digging into Momentum or any of our products, we urge you to join this smart and highly vocal group. Take it away Oleksiy:

One of my first tasks as part of the technical operations team at LivingSocial was to figure out a way to make our messaging software perform better and deliver faster. We use Momentum, and it is really fast, but I’m always looking for ways to squeeze as much speed out of our system as possible.

While working on this I created a set of scripts to integrate Momentum with Graphite for all kinds of crazy stats graphing. Those scripts will be open-sourced soon, but for now I've decided to share a few tips about performance-related changes we've made that improved our performance at least 2x:

  • Use the EXT2 filesystem for spool storage. After a lot of benchmarking we noticed that we were doing far too much I/O relative to our throughput. Some investigation showed that the EXT3 filesystem we had been using for the spool partition carried far too much metadata-update overhead, because the spool storage uses a lot of really small files. Switching to EXT2 gained us at least 50-75% additional performance. We gained further performance by turning on the noatime mount option for the spool. Some sources claim that XFS is a better option for spool directories, but we've decided to stick with EXT2 for now.
  • Do not use the %h{X} macro in your custom logs. Custom logging is an awesome feature of Momentum, and we use it to log our bounces along with some information from the mail headers. Unfortunately, the most straightforward approach (the %h{X} macro) was not the best option for I/O-loaded servers, because every time Momentum needs to log a bounce it must swap the message body in from disk and parse it to get the header value. To solve this we created a Sieve+ policy script that extracts the headers we need during the initial spooling phase (while the message is still in memory) and puts those values into the message metadata. This way, when we need to log those values, we do not have to swap the message body in from disk. Here is the Sieve script to extract the header value (original formatting at http://kovyrin.net/2012/01/07/momentum-ecelerity-tuning-tips/):

    require [ "ec_header_get", "vctx_mess_set", "ec_log" ];

    # Extract x-ls-send-id header to LsSendId context variable
    # (later used in the delivery log)

    ($send_id) = ec_header_get "x-ls-send-id";

    vctx_mess_set "LsSendId" $send_id;


    After this we could use it in a custom logger like this:

    custom_logger "custom_logger1"
    {
    delivery_logfile = "cluster:///var/log/ecelerity/ls-delivery_log.cluster=>master"
    delivery_format = "%t@%BI@%i@%CI@D@%r@%R@%m@%M@%H@%p@%g@%b@%vctx_mess{LsSendId}"
    delivery_log_mode = 0664
    }

    Editor’s Note: For those who are not familiar with Momentum, it was formerly named Ecelerity, and this is still how the product is designated in the code, as in line 3.

  • Give more RAM to Momentum. When Momentum receives a message, it stores it to disk (as required by the SMTP standard) and then tries to deliver the copy it has in memory; if delivery succeeds, the on-disk copy is unlinked. The problem with a really heavy outbound traffic load is that Momentum needs to keep tons of emails in memory, but by default it can only hold 250. With a load of 250-500 messages a second this is just too small. To change this limit we increased the Max_Resident_Active_Queue parameter to 1000000 (of course, we made sure to have enough RAM to hold that many messages if needed) and set Max_Resident_Messages to 0 (which means unlimited). This allows Momentum to keep as many messages resident as possible and reduces the load caused by the swap-in operations required for re-delivery attempts and the like. Editor's Note (IMPORTANT): Max_Resident_Active_Queue and Max_Resident_Messages are advanced settings that require careful planning and a thorough understanding of how changes will impact the memory in your system. The changes listed above have worked for LivingSocial, but they won't work for everyone. In particular, the amount of memory available to you, which features of the system you are using, and your average message size can all affect the sizing of these two parameters and their impact on the system. We recommend you carefully review the documentation on these settings, and follow up with support on any further questions or concerns before implementing changes.
  • Choose a proper size for your I/O-related thread pools. In the default Momentum configuration the SwapIn and SwapOut thread pool sizes are set to 20. Under a really high load, even on our 4x SAS 15k RAID10, this tends to be too high a value. We switched those pools to 8 threads each, which helped reduce I/O contention and improve overall I/O throughput. In summary, as with any optimization, before tuning your system it really helps to set up as much monitoring for your Momentum servers as possible: Cacti graphs, Graphite (mentioned above), Ganglia or something else; it doesn't matter which. Just make sure you can observe all aspects of your system's performance and understand what is going on before changing any performance-related settings.
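The two RAM-related settings discussed above are set in ecelerity.conf; a minimal sketch of what that fragment might look like, assuming the plain top-level option = value syntax used elsewhere in this file (the values shown are the ones LivingSocial chose, not a general recommendation; see the Editor's Note above before changing them):

```
# Sketch only: raise the resident-message ceilings discussed above.
# Plan memory capacity carefully before applying values like these.
Max_Resident_Active_Queue = 1000000
Max_Resident_Messages = 0
```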

Momentum features a configuration repository that resides on the Cluster Manager (CM). This allows Momentum server nodes to pull configuration updates every minute (from cron). Local copies of this configuration reside in a “working copy (WC) directory” of the same name on EACH cluster node (server and manager):

/opt/msys/ecelerity/etc/conf/default

The Momentum Web UI provides customers with the ability to make configuration changes from within the UI. While some customers use this interface, many choose to make configuration modifications from the OS command line. This article describes a best practice for making configuration changes to ecelerity.conf from the command line in a single subcluster Momentum environment.

  1. Log in to one of your Momentum server nodes (not the CM).
  2. Be sure that you're using the latest version of the configuration by invoking this from the command line:
    /opt/msys/ecelerity/bin/eccfg pull -u username -p user_password
  3. Edit the configuration. Use your favorite editor, such as vim or nano, to make changes to ecelerity.conf within the local WC directory of the Momentum server, and save.
  4. Check the syntax by invoking
    /opt/msys/ecelerity/bin/validate_config
    This application will return "Configuration valid" for valid configurations. However, it will "return quietly" if the configuration is NOT valid. Running ec_dump_config will give you verbose output and a good hint as to your configuration error.
  5. Check the configuration by running "config reload" from within ec_console.
  6. Commit the configuration to the repository using the eccfg application:
    /opt/msys/ecelerity/bin/eccfg commit -u username -p user_password

NOTE: There are some cases where validate_config may give you a false positive. One example is when using “node local” configurations (see below). In these cases, you’ll need to run validate_config and config reload on EACH Momentum server to be sure that your local configuration is valid.

Other configuration-related tips:

  1. Momentum allows "node local" configuration. Create a directory such as
    /opt/msys/ecelerity/etc/conf/serverhostname
    where "serverhostname" is the hostname. Any configuration files saved in this directory will be included in ecelerity.conf (assuming you've used the include directive within ecelerity.conf), because these "node local" directories are part of Momentum's default search path for configuration files.
  2. There is also a global repository containing configuration common to every node in every subcluster (e.g., mbus.conf). That checkout resides here:
    /opt/msys/ecelerity/etc/conf/global
  3. Check Sieve syntax using this ec_console directive (for example):
    sieve:sieve testfile each_rcpt_phase1 /path/to/each_rcpt.siv
  4. Check Lua syntax using rcluac:
    /opt/msys/3rdParty/bin/rcluac file.lua
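Tip 1's node-local directory is named after the host, so the path can be derived on each machine; a small sketch (the path prefix is from the article, and whether Momentum expects the short or fully qualified hostname is not specified here, so check what hostname returns on your nodes):

```shell
# Compute this machine's node-local config directory
# (the directory is named after the hostname, per tip 1 above)
host=$(hostname)
echo "/opt/msys/ecelerity/etc/conf/$host"
```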

Tom Mairs
Manager, Solution Engineering

Tom Cain
Manager, Technical Training

One of the common questions I am asked is whether Momentum can be configured to rotate logs on an hourly basis instead of a daily one. The answer is yes. Log rotation is scheduled in /etc/cron.d/msys-ecelerity-core, but keep in mind that ec_rotate is configured to retain only seven rotations, so if you switch to hourly rotation you will only retain seven hours of log data.

Retention is configured in /opt/msys/ecelerity/etc/ec_rotate.conf (if the file is not present, you can copy it in from /opt/msys/ecelerity/etc/sample-configs/ec_rotate.conf) as "retention = 7". If you want to rotate on an hourly basis and still maintain a week's worth of logs, you need to change this value to 168 (7 days * 24 rotations).
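The retention value is simply days to keep multiplied by rotations per day; the arithmetic above can be sketched as (variable names are illustrative):

```shell
# retention = days to keep * rotations per day (hourly rotation = 24/day)
days_to_keep=7
rotations_per_day=24
echo "retention = $((days_to_keep * rotations_per_day))"
# prints: retention = 168
```

The same formula gives you the value for any other rotation schedule, e.g. 7 * 4 = 28 for six-hourly rotation.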

The most common reason for such frequent log rotation is to decrease the lag between an event being logged and that event being updated in a backend database by a log processor. Momentum includes support for JLOG, an indexed log format that allows near-real-time log processing, supports checkpointing, and provides automatic garbage collection. Watch this space for a future update on how you can take advantage of JLOG to simplify your log processing.