All reject rules now silent discards, vacation reply requires spam protection

One of the original design problems with email is that none of the email addresses in an email are certified or guaranteed in any way (when it was designed, the Internet wasn’t full of spammers and other hostile parties like it is now).

This flaw allows spammers to put any email addresses they want in any part of an email message. There are systems that attempt to address this problem (such as SPF), but they only mitigate the issue rather than eliminate it.

One significant problem this ability to forge email addresses causes is something called "backscatter". Backscatter occurs when a spam email with a forged from address is sent to a system, and that system then generates a message in response and sends it back to the forged from address. The most common auto-generated responses are non-delivery notifications (bounce messages) and auto-replies (vacation responses).

In that case the response message goes to some random address the spammer made up, which might be a mailbox at any system, including a spamtrap address that can affect the reputation of our sending IP addresses.

While FastMail has systems in place to try and detect these backscatter messages from other systems and file them into Junk Mail, it’s still possible for FastMail to be a source of these backscatter messages as well, which as noted above can affect the reputation of our sending IP addresses.

To reduce the chance of FastMail being a source of backscatter, we’ve now made two changes.

  1. Until now, the rules to reject/discard messages on the Options –> Define Rules screen were grouped under "Reject emails". There was a separate "Silent" checkbox which controlled whether the email was silently discarded or whether a reject/bounce message was sent back to the sender of the message.

    When FastMail first started 10 years ago, the default was to always reject emails, that is, generate a bounce message. Several years ago, we changed it so the Silent checkbox was checked by default, meaning that silently discarding emails was the default behaviour.

    We’ve now completely removed the Silent checkbox and renamed the section "Discarding emails" as silent discard is now the only option. This will completely eliminate bounces generated by user filtering rules.

  2. Until now, you could set up a vacation auto-response on the Options –> Define Rules screen under the Forward tab at any time.

    We’ve now changed this so that if you want to enable a vacation response message, you must enable at least Normal level spam protection on your account. This will ensure that in the vast majority of cases, we never send a vacation reply to any spam messages.

For users with existing reject rules, those rules have now all been changed to silent discard rules.

For users with existing vacation reply settings enabled, the vacation reply has been disabled if the user does not have Normal level or higher spam protection enabled. Unfortunately, this means that for Guest & Member accounts, you cannot re-enable vacation replies until you upgrade your account to at least Ad Free, as Guest & Member accounts do not support anything but Basic level spam protection.

We’re sorry for any inconvenience these changes cause, but they are required to protect the reputation of our outgoing IP addresses, which is required to allow all users to send email with high reliability.

Posted in News.

Changing password or disabling IMAP/POP logins now closes any existing IMAP/POP connections

We’ve just made a change today so that if you go to the Options –> Account Preferences screen and change your password, or disable IMAP or POP logins, then we will immediately close any existing open IMAP or POP connections.

This security enhancement is particularly useful if you have a mobile device that is lost or stolen. By changing your password via the web interface on another device/computer, you will immediately force any existing IMAP/POP connections to be logged out and prevent any further logins from that device because the password will no longer be correct.

We also plan in the future to allow expiring web sessions from other machines as well. We’ll announce on this blog when that feature is ready.

Posted in News.

Building the new AJAX mail UI part 2: Better than templates, building highly dynamic web pages

This is part 2 of a series of technical posts documenting some of the interesting work and technologies we’ve used to power the new interface (see also part 1, Instant notifications of new emails via eventsource/server-sent events). Regular users can skip these posts, but we hope technical users find them interesting.

As dynamic websites constructed entirely on the client side become de rigueur, there are a number of templating languages battling it out to become the One True Way™ of rendering your page. All follow essentially the same style: introduce extra control tags to intersperse with HTML. But if we go back to basics, HTML is simply a way of serialising a tree structure into a text format that is relatively easy for humans to edit. Once the browser receives this, it then has to parse it to generate an internal DOM tree representation before it can draw the page.

In an AJAX style application, we don’t transmit HTML directly to the browser. Instead, we generate the HTML on the client side, and often update the HTML in different parts of the page over time as the user interacts with the application. As string manipulation for building HTML from data objects is hard to write and error-prone, we normally use a template language and a library that compiles these snippets into code; this executes with a data context, producing a string of HTML that may be set as an element’s innerHTML property. The browser then builds a DOM tree, which we can query to update nodes and add event listeners.

There is, however, another alternative for building the DOM tree: directly in JavaScript. Modern browsers are very fast at parsing and executing JavaScript. What if, with the help of a liberal sprinkling of syntactic sugar, we were to build the DOM tree in code instead? Start by considering a simple function el to declare an element.

el( 'div' )

OK, so far we’ve just renamed the document.createElement method. What next? Well, we’re going to want to add class names and ids to elements a lot. Let’s use the CSS syntax which everyone knows and loves.

el( 'div#id.class1.class2' );

Hmm, that’s quite clean and readable compared to:

<div id="id" class="class1 class2"></div>

What else? Well, there may be other attributes. Let’s pass them as a standard hash:

el( 'div#id', { tabindex: -1, title: 'My div' })

That’s pretty neat. Let’s have a quick look at the HTML for comparison:

<div id="id" tabindex="-1" title="My div"></div>

A node’s not much use on its own. Let’s define a tree:

var items = [ 1, 2, 3, 4 ];
el( 'div#message', [
    el( 'a.biglink', { href: '' }, [
        'A link to Google'
    ]),
    el( 'ul', [ item ) {
            return el( 'li.item', [ item + '. Item' ] );
        })
    ]),
    items.length > 1 ? 'There are lots of items'.localise() + '. ' : null,
    'This is just plain text. <script>I have no effect</script>'
]);

So what have we achieved? We’ve got a different way of writing a document tree, which is essentially very similar to HTML but changes the punctuation slightly to make it valid JavaScript syntax instead. So what? Well, the point is this readable declaration is directly executable code; we just need to define the el function. As it’s pure JS, we can replace static strings with variables. We can easily add conditional nodes, as shown in the example above. We can call other functions to generate a portion of the DOM tree, or use array iterators to cleanly write loops. Wrap it all in a function and we can pass different data into the function each time to render our DOM nodes… we have ourselves a template.


While innerHTML used to be much faster than JS DOM methods, this no longer holds for modern browsers. Let’s have a look at a benchmark:

Here we have four different methods of rendering the same bit of HTML. This is a real-world snippet, taken from a core part of our new webmail application, with just a few class names changed. Let’s first look at the hand-optimised innerHTML method and the hand-optimised DOM method. In Chrome the DOM version is over 50% faster than using innerHTML, and in Safari it’s 45% faster. Firefox is just as fast with either, while Opera is marginally faster using innerHTML. IE is still twice as fast using innerHTML rather than DOM methods. Perhaps most interesting, though, is mobile browser performance. On desktop, computers are fast enough these days that the performance differences are less of an issue. On mobile it’s crucial, and here we find that the DOM method is anywhere from 45% to 100% faster in mobile WebKit browsers, such as Safari on the iPhone and the default Android browser, and level with innerHTML on Opera Mobile.

A few things to note before we look at the real-world tests. Firstly, for maximum speed, the innerHTML method is assuming all text is already escaped; a very dangerous assumption. The DOM method on the other hand needs to make no such assumptions, as text is added to the DOM tree by creating text nodes. Since the text is never parsed as HTML, there is zero chance of accidentally injecting a malicious script tag. Secondly, if you need a reference to any of the DOM nodes you’re creating (for example to save for updating later or to add event listeners), with the innerHTML method you must query the DOM after you’ve constructed it. With direct DOM construction, you already have the node reference; you just save it as you create it.
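As a tiny sketch of that guarantee (the helper name here is ours, purely for illustration), any string routed through createTextNode stays plain text:

```javascript
// Sketch: text appended as a text node is data, never markup
function safeText( message ) {
    var node = document.createElement( 'div' );
    // even if `message` contains a <script> tag, a text node is
    // rendered literally and the script can never execute
    node.appendChild( document.createTextNode( message ) );
    return node;
}
```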

These hand-optimised functions are fast, but unmaintainable and a pain to write. Let’s move on to something we would use on a real website.

Handlebars is a popular JS templating language, and claims to be one of the fastest around. It produces a string for use with innerHTML to construct the DOM elements. Let’s compare that to the JS declarative approach I outlined above (which I’m going to call Sugared DOM). Compared to the raw methods, the Sugared DOM was more-or-less equal in performance to the hand-optimised innerHTML in Chrome and Safari, even on the iPhone. It’s equal to or faster than Handlebars templates (sometimes by a significant margin) in all browsers other than IE, and crucially on mobile browsers it’s anywhere from 50% to 100% faster. Note too that the initial compilation time for Handlebars templates is not included in these benchmarks.


On almost all modern browsers the Sugared DOM method is faster than normal templates, even when ignoring the compile-time cost the latter have. There are other benefits as well:

  • Easy to debug (the template declaration is the code).
  • The sugar code is much smaller than any decent templating library.
  • No need to query the DOM, as you can just save references to nodes you’ll need later as you create them. This is faster and may remove the need for a whole JS library you currently use (like Sizzle).
  • No escaping worries; zero chance of XSS bugs. When you include a string in the declaration it is explicitly set as a text node, so is never parsed as HTML. <script> tags are harmless!
  • No extraneous white-space text nodes. White space between block-level nodes in HTML does not affect the rendering, but it does add extra nodes to the DOM. These can be a pain when you’re manipulating the tree later (the firstChild property may not return what you expect) and they increase the memory usage of the page.
  • As it’s pure JS, the templates can be easily included inline as part of view classes that also handle the behaviour of the view, or kept in separate files.
  • JSHint will validate your syntax; much easier than tracking down syntax errors from a template’s compiler.
  • Flexibility to use the full power of JS; easily call other functions to generate parts of your DOM tree, localise a string, or do whatever else you like.

What are the downsides? Well, it’s slightly slower in Internet Explorer (although still plenty fast enough in real world use) and the difference in syntax to HTML may take a little time to become accustomed to, especially if templates are written by designers rather than coders (then again every template introduces its own syntax, so I’m not sure there’s much difference here). And, err, I think that’s about it.

It’s time to ditch HTML based templates. Embrace the DOM, and enjoy your powerful, fast and readable new way to render pages.

Written by Neil Jenkins

Posted in News, Technical.

Monthly bandwidth limits removed

TL;DR: All monthly bandwidth limits for emails and files have been removed. Existing hourly limits still apply, and new daily limits have been added. The monthly sum of the new daily limits is significantly higher than the old monthly limits.

More details: When building an email service, one of the things we realised early on is that you have to deal with abuse and resource limiting issues. If you don’t set any limits, people will abuse your service.

Because we have always regarded speed and reliability as highly important, when we first chose our data center we picked a place with a great network (NYI). However, that came with higher bandwidth costs, which meant we had to put systems in place to track and limit users’ monthly bandwidth usage.

Additionally over time we added systems to track email sending and receiving in real time, and added hourly limits to stop mass mail floods or spam sending runs.

Since adding these limits, we’ve found that the short term limits have become the more important tool for stopping abuse, while the monthly limits have become less and less of an issue. So from today, we’re removing all monthly bandwidth quotas.

We still enforce hourly quotas, and have also added daily quotas, though the sum of these daily quotas over a month is significantly higher than the previous monthly quotas, a large increase for all users.

Email bandwidth quotas (all in MB)

| Account | Recv hourly | Sent hourly | Sent+recv hourly | Sent+recv daily | Sent+recv monthly (daily x 30) | Old (sent+recv) monthly |
|---|---|---|---|---|---|---|
| Guest | 30 | 30 | 60 | 120 | 3,600 | 160 |
| Member | 60 | 60 | 120 | 240 | 7,200 | 160 |
| Ad free | 60 | 60 | 120 | 240 | 7,200 | 640 |
| Full | 300 | 300 | 600 | 1,200 | 36,000 | 1,200 |
| Enhanced | 1,000 | 1,000 | 2,000 | 4,000 | 120,000 | 4,000 |
| Lite | 300 | 300 | 600 | 1,200 | 36,000 | 800 |
| Everyday | 1,000 | 1,000 | 2,000 | 4,000 | 120,000 | 1,600 |
| Superior | 2,000 | 2,000 | 4,000 | 8,000 | 240,000 | 4,000 |
| Basic | 300 | 300 | 600 | 1,200 | 36,000 | 1,600 |
| Standard | 1,000 | 1,000 | 2,000 | 4,000 | 120,000 | 4,000 |
| Professional | 2,000 | 2,000 | 4,000 | 8,000 | 240,000 | 16,000 |


File bandwidth quotas (all in MB)

| Account | Hourly | Daily | Monthly (daily x 30) | Old monthly |
|---|---|---|---|---|
| Guest | 10 | 20 | 600 | 80 |
| Member | 10 | 20 | 600 | 80 |
| Ad free | 10 | 20 | 600 | 160 |
| Full | 500 | 1,000 | 30,000 | 4,000 |
| Enhanced | 1,000 | 2,000 | 60,000 | 16,000 |
| Lite | 500 | 1,000 | 30,000 | 160 |
| Everyday | 1,000 | 2,000 | 60,000 | 4,000 |
| Superior | 3,000 | 6,000 | 180,000 | 40,000 |
| Basic | 500 | 1,000 | 30,000 | 320 |
| Standard | 1,000 | 2,000 | 60,000 | 4,000 |
| Professional | 3,000 | 6,000 | 180,000 | 8,000 |
Posted in News.

Building the new AJAX mail UI part 1: Instant notifications of new emails via eventsource/server-sent events

With the release of the new AJAX user interface into testing on the Fastmail beta server, we decided that it might be interesting to talk about the technology that has gone into making the new interface work. This post is the first of a series of technical posts we plan to do over the next few months, documenting some of the interesting work and technologies we’ve used to power the new interface. Regular users can skip these posts, but we hope technical users find them interesting.

We’re starting the series by looking at how we push instant notifications of new email from the server to the web application running in your browser. The communication mechanism we are using is the native eventsource/server-sent events object. Our reasons for choosing this were threefold:

  1. It has slightly broader browser support than websockets (eventsource vs websockets)
  2. We already had a well defined JSON RPC API, using XmlHttpRequest objects to request data from the server, so the only requirement we had was for notifications about new data, which is exactly what eventsource was designed for
  3. For browsers that don’t support a native eventsource object, we could fallback to emulating it closely enough without too much extra code (more below), so we need only maintain one solution.

We’re using native eventsource support in Opera 11+, Chrome 6+, Safari 5+ and Firefox 6+. For older Firefox versions, the native object is simulated using an XmlHttpRequest object, since Firefox allows you to read data as it is streaming. Internet Explorer unfortunately doesn’t, and whilst there are ways of doing push using script tags in a continually loading iframe, they felt hacky and less robust, so we just went with a long polling solution there for now. It uses the same code as the older-Firefox eventsource simulation object; the only difference is that the server has to close the connection after each event is pushed, and the client then immediately reestablishes a new connection. The effect is the same, it’s just a little less efficient.

Once you have an eventsource object, be it native or simulated, using it for push notifications in the browser is easy; just point it at the right URL, then wait for events to be fired on the object as data is pushed. In the case of mail, we just send a ‘something has changed’ notification. Whenever a new notification arrives, we invalidate the cache and refresh the currently displayed view, fetching the new email.
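The client-side wiring can be sketched like this (the function and callback names are ours, for illustration; the real application’s cache and view objects obviously differ):

```javascript
// Sketch of the client side: EventSourceImpl is the native EventSource
// (or our emulation class), url is the push endpoint, and onChange is
// whatever invalidates the cache and refreshes the current view.
function listenForMail( EventSourceImpl, url, onChange ) {
    var source = new EventSourceImpl( url );
    source.onmessage = function () {
        // the payload is just a 'something has changed' ping; the app
        // refetches the real data via its normal JSON RPC API
        onChange();
    };
    return source;
}
```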

On the server side, the event push implementation had a few requirements and a few quirks to work with our existing infrastructure.

Because eventsource connections are long lived, we need to use a system that can scale to a potentially very large number of simultaneous open connections. We already use nginx on our front end servers for http, imap and pop proxying. nginx uses a small process pool with a non-blocking event model and epoll on Linux, so it can scale to a very large number of simultaneous connections. We regularly see over 30,000 simultaneous http, imap and pop connections to a frontend machine (mostly SSL connections), with less than 1/10th of total CPU being used.

However, with a large number of client connections to nginx, we’d still have to proxy them to some backend process that could handle the large number of simultaneous connections. Fortunately, there is an alternative event based approach.

After a little bit of searching, we found a third party push stream module for nginx that was nearly compatible with the W3C eventsource specification. We contacted the author, and thankfully he was willing to make the changes required to make it fully compatible with the eventsource spec and incorporate those changes back into the master version. Thanks Wandenberg Peixoto!

Rather than proxying a connection, the module accepts a connection, holds it open, and connects it to an internal subscriber "channel". You can then use POST requests to the matching publisher URL channel to send messages to the subscriber, and the messages will be sent to the client over the open connection.

This means you don’t have to hold lots of internal network proxy connections open and deal with that scaling, instead you just have to send POST requests to nginx when an "event" occurs. This is done via a backend process that listens for events from cyrus (our IMAP server), such as when new emails are delivered to a mailbox, and (longer term) when any change is made to a mailbox.

Two other small issues also need to be dealt with. First, only logged in users should be able to connect to an eventsource channel. Second, we have two separate frontend servers and clients connect randomly to one or the other (each hostname resolves to two IP addresses), so the backend needs to send its POST requests to the particular frontend nginx server the user is connected to.

We handle the first by accepting the client connection, proxying it to a backend mod_perl server which does the standard session and cookie authentication, and then using nginx’s internal X-Accel-Redirect mechanism to do an internal redirect that hooks the connection up to the correct subscriber channel. For the second, we add an "X-Frontend" header to each proxied request, so that the mod_perl backend knows which server the client is connected to.

The stripped down version of the nginx configuration looks like this:

    # clients connect to this URL to receive events
    location ^~ /events/ {
      # proxy to backend, it'll do authentication and X-Accel-Redirect
      # to /subchannel/ if user is authenticated, or return error otherwise
      proxy_set_header   X-Frontend   frontend1;
      proxy_pass         http://backend/events/;
    }

    # location clients are internally redirected to; this connects
    # them to a push stream subscriber channel
    location ^~ /subchannel/ {
      push_stream_eventsource_support on;
      push_stream_content_type "text/event-stream; charset=utf-8";
    }

    # location we POST to from backend to push events to subscribers
    location ^~ /pubchannel/ {
      # prevent anybody but us from publishing
      deny    all;
    }
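The authentication step boils down to either rejecting the request or handing it back to nginx with an X-Accel-Redirect header. As a hypothetical sketch (the real backend is mod_perl, and the channel id scheme here is invented for illustration):

```javascript
// Hypothetical sketch of the backend's auth decision (invented names)
function eventsAuthResponse( session ) {
    if ( !session || !session.valid ) {
        return { status: 403, headers: {} };
    }
    // derive a stable channel id from the user and session key, so a
    // reconnect within the same login session reuses the same channel
    var channelId = session.userId + '-' + session.key;
    return {
        status: 200,
        // nginx sees this header and internally re-routes the client's
        // still-open connection to the subscriber channel location
        headers: { 'X-Accel-Redirect': '/subchannel/' + channelId }
    };
}
```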

Putting the whole process together, the steps are as follows:

  1. Client connects to
  2. Request is proxied to a mod_perl server
  3. The mod_perl server does the usual session and user authentication
  4. If not successful, an error is returned, otherwise we continue
  5. The mod_perl server generates a channel number based on the user and session key
  6. It then sends a POST to the nginx process (picking the right one based on the X-Frontend header) to create a new channel
  7. It then returns an X-Accel-Redirect response to nginx which tells nginx to internally redirect and connect the client to the subscriber channel
  8. It then contacts an event pusher daemon on the user’s backend IMAP server to let it know that the user is now waiting for events. It tells the daemon the user, the channel id, and the frontend server. After doing that, the mod_perl request is complete and the process is free to service other requests
  9. On the backend IMAP server, the pusher daemon now waits for events from cyrus, and filters out events for that user
  10. When an event is received, it sends a POST request to the frontend server to push the event over the eventsource connection to the client
  11. One of the things the nginx module returns in response to the POST request is a "number of active subscribers" value. This should be 1, but if it drops to 0, we know the client has dropped its connection, so at that point we stop monitoring and clean up internally so we don’t push any more events for that user and channel. The nginx push stream module automatically does the same cleanup on the frontend.
  12. If a client drops a connection and re-connects (in the same login session), it’ll get the same channel id. This avoids potentially creating lots of channels
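Step 11’s cleanup logic can be sketched as follows (the real daemon is not written in JavaScript and these names are invented; `post` stands in for whatever HTTP client it uses):

```javascript
// Hypothetical sketch of the pusher daemon's publish-and-cleanup step
function makePusher( post, activeChannels ) {
    return function pushEvent( frontend, channelId, event ) {
        // POST the event to the frontend nginx publisher URL
        var reply = post(
            'http://' + frontend + '/pubchannel/?id=' + channelId, event );
        // the push stream module reports active subscribers; 0 means the
        // client dropped its connection, so stop tracking this channel
        if ( reply.subscribers === 0 ) {
            delete activeChannels[ channelId ];
        }
    };
}
```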

In the future, we will push events when any mailbox change is made, not just on new email delivery (e.g. a change made in an IMAP client, a mobile client, or another web login session). We don’t currently do this because we need to filter out notifications caused by actions the client itself made; since it already knows about those changes, invalidating its cache for them would be very inefficient.

In general this all works as expected in all supported browsers and is really very easy to use. We have however come across a few issues to do with re-establishing lost connections. For example, when the computer goes to sleep then wakes up, the connection will have probably been lost. Opera has a bug in that it doesn’t realise this and keeps showing that the connection is OPEN (in readyState 1).

We’ve also found a potential related issue with the spec itself: "Any other HTTP response code not listed here, and any network error that prevents the HTTP connection from being established in the first place (e.g. DNS errors), must cause the user agent to fail the connection". This means that if you lose internet connection (for example pass through a tunnel on the train), the eventsource will try to reconnect, find there’s no network and fail permanently. It will not make any further attempts to connect to the server once a network connection is found again. This same problem can cause a race condition when waking a computer from sleep as it often takes a few seconds to re-establish the internet connection. If the browser tries to re-establish the eventsource connection before the network is up, it will therefore permanently fail.

This spec problem can be worked around by observing the error event. If the readyState property is now CLOSED (in readyState 2), we set a 30 second timeout. When this fires, we create a new eventsource object to replace the old one (you can’t reuse them) which will then try connecting again; essentially this is manually recreating the reconnect behaviour.
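In code, the workaround looks roughly like this (a sketch: `createSource` returns a fresh EventSource, and `delay` is setTimeout in practice, injected here so the example is self-contained):

```javascript
// Sketch of the manual reconnect: on error, if the connection is
// CLOSED (readyState 2), replace the EventSource after 30 seconds
function keepAlive( createSource, delay ) {
    var state = { source: null };
    function connect() {
        var es = createSource();
        state.source = es;
        es.onerror = function () {
            // per spec the browser will not retry after a network error,
            // so schedule a manual reconnect with a brand new object
            // (EventSource objects can't be reused)
            if ( es.readyState === 2 ) {
                delay( connect, 30000 );
            }
        };
    }
    connect();
    return state;
}
```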

The Opera bug in not detecting it’s lost a connection after waking from sleep can be fixed by detecting when the computer has been asleep and manually re-establishing the connection, even if it’s apparently open. To do this, we set a timeout for say 60s, then when it fires we compare the timestamp with when the timeout was set. If the difference is greater than (say) 65s, it’s probable the computer has been asleep (thus delaying the timeout’s firing), and so we again create a new eventsource object to replace the old one.
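The sleep-detection trick can be sketched like so (again with `now` and `schedule` injected in place of and setTimeout so the example is self-contained; `reconnect` replaces the eventsource object):

```javascript
// Sketch: arm a 60s timer; if it fires more than ~65s after arming,
// the computer was probably asleep, so force a reconnect
function detectSleep( reconnect, now, schedule ) {
    function arm() {
        var armedAt = now();
        schedule( function () {
            // a late-firing timer means the machine was suspended,
            // so replace the connection even if it claims to be open
            if ( now() - armedAt > 65000 ) {
                reconnect();
            }
            arm();  // re-arm for the next interval
        }, 60000 );
    }
    arm();
}
```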

Lastly, it was reasonably straightforward to implement a fully compatible eventsource implementation for Firefox using just a normal XmlHttpRequest object, thereby making this feature work in FF3.5+ (we haven’t tested further back, but it may work in earlier versions too). The only difference is that the browser can’t release from memory any of the data received over the eventsource connection until the connection is closed (and they could be really long lived), as you can always access it all through the XHR responseText property. However, we don’t know whether the other browsers make this optimisation in their native eventsource implementations, and given the data pushed through the eventsource connection is normally quite small, this isn’t an issue in practice.

This means we support Opera/Firefox/Chrome/Safari with the same server implementation. To add Internet Explorer to the mix we use a long polling approach. To make the server support long polling all we do is make IE set a header on an XmlHttpRequest connection (we use X-Long-Poll: Yes), and if the server sees that header it closes the connection after every event is pushed; other than that it’s exactly the same. This also means IE can share FF’s eventsource emulation class with minimal changes.
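The server-side rule for long polling is trivial to sketch (the header name is from this post; the connection object here is illustrative):

```javascript
// Sketch: after pushing an event, close long-poll (IE) connections so
// the client can immediately reconnect; keep eventsource streams open
function afterEventPushed( headers, connection ) {
    if ( headers[ 'x-long-poll' ] === 'Yes' ) {
        connection.close();
    }
}
```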

The instant notification of new emails is one of the core features of the new interface that blurs the boundary between traditional email clients and webmail clients. Making this feature work, in a way that we knew would scale going forward, was an important requirement for the new interface. We’ve achieved it with a straightforward client solution, and in a way that elegantly integrates with our existing backend infrastructure.

Posted in News, Technical.

Change of default MX records for domains

This post contains some technical information, mostly useful for people who host email for their own domain at FastMail.

TL;DR: If you host email for your domain at FastMail, but host the DNS for your domain at an external DNS provider, we recommend you log in to your DNS provider and change the two MX records for your domain from in[12] to in[12], i.e. replace the first dot (‘.’) with a dash (‘-’).

If you host email for your domain at FastMail, and you host the DNS for your domain at FastMail, no change is required; it has all been done automatically.

More details: For many years, the default MX records for domains hosted at FastMail have been and

However it turns out there’s a small problem with this. The hostnames in[12] don’t match the wildcard * SSL certificate we have (similar to this previous issue). So if a remote system uses opportunistic TLS encryption to send email to us, the connection will be encrypted, but it may be reported as "Untrusted" because the certificate doesn’t match.

This isn’t disastrous, but it is annoying and exposes a potential man-in-the-middle attack.

So we’ve gone and changed the DNS MX records for all domains hosted at FastMail to default to and

For users who use us to host DNS for their domains, no change is required on your part; all of this has been updated automatically.

For users that use an external DNS provider, we recommend you update the MX records for your domains at your DNS hosting provider. We’ll continue to support the old in[12].smtp values for some time and alert users if/when we discontinue it, but the sooner you make the change, the better it is for the secure transmission of email to your domain.

We’ve updated our documentation to reflect these new values.

Posted in News, Technical.

New webmail user interface being tested on beta server

Just in time for Christmas, we’re releasing our new webmail interface for testing on our beta server.

The new interface is the culmination of many months of work from many different team members, and has a number of new and powerful features.

  1. Full AJAX design with caching, pre-fetching and optimistic actions

    Rather than having to reload the entire page on each view or action, only the data that is needed is loaded from the server and displayed on the page. After you’ve viewed a message, that data is cached while you’re logged in, so viewing the message again is instant. While viewing a message, next and previous messages are pre-loaded so moving between messages is very quick. When applying an action (e.g. move message, delete message, etc.), the action is immediately applied on the screen and sent to the server making actions appear instant.

    Like the previous interface, there are many keyboard shortcuts, like ‘j’ and ‘k’ to move to the next/previous message, ‘x’ to mark a message, ‘m’ to move the current/selected message(s), ‘g’ to search the folder listing, and ‘.’ (dot) to bring up the action menu for the current/selected message(s).

    All these features put together make using the new interface one of the fastest mail experiences available.

  2. Full conversations support across folders

    All messages are grouped together into conversations. A conversation represents the back and forth sending of messages on a particular topic. The conversation system we’ve built works across folders, so when clicking on a conversation to read it, you’ll see a stream of all related messages in all folders, including any messages filed into other folders, your own sent messages in your Sent Items folder, and any unfinished drafts you might have started in reply to a message in a conversation.

    This allows you to quickly see the historical context of any new message without having to dig through your saved messages to see the past messages, or what you sent in your last message.

  3. Archiving is the new default action

    After looking at the statistics of mailboxes on our system, we found that many people didn’t create any folders in their accounts, and instead just kept everything in their Inbox. This results in a large and cluttered Inbox, and makes it harder to find messages that need dealing with or responding to. Because of the large increase in storage space available to most people relative to the volume of email they get, the old paradigm of deleting email as soon as you’ve read it is less relevant, and instead it’s better just to save it in an Archive.

    So to make managing your email easier, we’ve now made Archive the default action. You can think of Archiving an email as "I just don’t want to see this in my Inbox any more, but I don’t want to permanently delete it either".

  4. Push updates when new email arrives

    When new emails arrive in your Inbox, they’ll be immediately pushed to your browser; there’s no need to refresh to see when new emails have arrived.
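For the technically curious, the "optimistic actions" idea from point 1 above can be sketched in a few lines. This is an illustrative Python sketch of the general pattern (our actual client is JavaScript, and the class and method names here are hypothetical): the on-screen state is updated immediately, and only rolled back if the server later rejects the action.

```python
class OptimisticMailView:
    """Illustrative sketch of the 'optimistic action' pattern.

    send_to_server is any callable that performs the real action and
    returns True on success, False on failure.
    """

    def __init__(self, send_to_server):
        self.view = {}  # message id -> folder, as shown on screen
        self.send_to_server = send_to_server

    def move_message(self, msg_id, folder):
        previous = self.view.get(msg_id)
        self.view[msg_id] = folder  # update the screen instantly
        ok = self.send_to_server("move", msg_id, folder)
        if not ok:
            self.view[msg_id] = previous  # server refused: roll back
        return ok
```

The key point is that the user never waits on the network round-trip: the common case (the server accepts the action) appears instant, and the rare failure case is corrected after the fact.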

We hope you enjoy trying out the new interface and the powerful new features.

A further note: the new interface is a work in progress. Look out for further updates posted to our blog in the new year.


New Opera/Fastmail office

After many months of planning and organisation, the Opera mail services team/Fastmail have finally moved into the new Opera Australia office in Melbourne, Australia.


The bold new entrance. The photo doesn’t do justice to the great textured floor.


All the Melbourne staff, from left: Andrew, Neil, Marian, Alfie (front), Rob, Richard (front) and Greg.


Our nice big break-out room and kitchen area.


All the offices have big windows to let in lots of natural light.

Thanks to Alfie for all his hard work in organising the building, fit out and move to the new office.


"View" link removed from attachments on message read screen in "Public Terminal" mode

When you enable the "Public Terminal" option on the login screen, Fastmail sets the "no-cache" and "no-store" cache control headers on every page. This means that browsers should not store a copy of the pages you visit (e.g. emails you read) to their local disk. Even after you log out of your session and leave the computer, if someone comes along and tries to view a page from the browser history, the browser should re-check with the server first, which of course will return "this user is now logged out, show the login page instead".
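At the HTTP level, this amounts to attaching a couple of response headers to every page. The header names and values below are the real HTTP ones; the function itself is just an illustrative Python sketch, not our actual server code:

```python
def public_terminal_headers(headers=None):
    """Return response headers telling the browser not to store the page.

    "no-cache" forces revalidation with the server before any reuse;
    "no-store" forbids writing the response to disk at all.
    """
    h = dict(headers or {})
    h["Cache-Control"] = "no-cache, no-store"
    h["Pragma"] = "no-cache"  # the HTTP/1.0 equivalent, for old clients
    return h
```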

However, there is a problem with this setup related to attachments. When an email has an attachment, the content of the attachment might be in a form the browser doesn’t understand (e.g. a Microsoft Word document). In that case, the browser has to save a copy of the attachment to the local disk, and then launch Microsoft Word to open the file.

Now in the case of the "View" link, the saving to disk would be done automatically into a temporary file storage area. However in IE, if you try to download a document over SSL with the no-cache or no-store attributes set, IE will explicitly not save the file to disk, and then when it tries to launch Microsoft Word to read the file, you’ll get a "file does not exist" error or the like.

For other browsers, it appears they work around this problem by actually saving a copy to disk in the temporary storage area, but deleting the file when you close the browser (at least that’s what Firefox did when I tested). That still potentially leaves the file on disk for some time.

To ensure the best privacy possible, while still allowing people to view attached documents in "Public Terminal" mode, we’ve decided to do the following:

  • When you login with the "Public Terminal" mode, we’ve removed the "View" link next to attachments. This solves two problems: the unexpected "file not found" error in IE, and the privacy concern of storing attachments to disk in the temporary file area of other browsers.
  • We’ve left the "View" link next to image attachments, because the web browser can display images itself, without launching a separate program, so it can obey the "no-cache"/"no-store" directives
  • With the "Download" link (which automatically brings up a "Save as…" dialog box), we’ve removed the "no-cache" and "no-store" settings, which means that IE will let you download it and save it somewhere so you can open it to view the document.

We like this solution because it makes things clearer to the user. In "Public Terminal" mode, if you want to view an attachment, you have to download it first, explicitly save it somewhere and then view it. The alternative approach of letting the browser do it either fails (IE), or causes an auto-save of the file to a temporary area which leaves it temporarily cached on the machine when the user doesn’t expect it.


TCP keepalive, iOS 5 and NAT routers

This post contains some very technical information. For users just interested in the summary:

If over the next week you experience an increase in frozen, non-responding or broken IMAP connections, please contact our support team (use the "Support" link at the top of the homepage) with details. Please make sure you include your operating system, email software, how you connect to the internet, and what modem/router/network connection you use in your report.

The long story: The IMAP protocol is designed as a long lived connection protocol. That is, your email client connects from your computer to the server, and stays connected for as long as possible.

In many cases, the connection remains open, but completely idle for extended periods of time while your email client is running but you are doing other things.

In general while a connection is idle, no data at all is sent between the server and the client, but they both know the connection still exists, so as soon as data is available on one side, it can send it to the other just fine.

There is a problem in some cases though. If you have a home modem and wireless network, then you are usually using a system called NAT that allows multiple devices on your wireless network to connect to the internet through one connection. For NAT to work, your modem/router must keep a mapping for every connection from any device inside your network to any server on the internet.

The problem is some modems/routers have very poor NAT implementations that "forget" the NAT mapping for any connection that’s been idle for 5 minutes or more (some appear to be 10 minutes or more). What this means is that if an IMAP connection remains idle with no communication for 5 minutes, then the connection is broken.

In itself this wouldn’t be so bad, but the way the connection is broken is the problem: rather than the client being told "this connection has been closed", packets from the client or server just disappear, which causes some nasty user-visible behaviour.

The effect is that if you leave your email client idle for 5 minutes and the NAT mapping is lost, then when you try to do something with the client (e.g. read or move an email), the client tries to send the appropriate command to the server. The TCP packets that contain the command never arrive at the server, but no RST packets are sent back to tell the client there’s a problem with the connection; the packets just disappear. So the local computer retries after a timeout period, and again a few more times, until usually about 30 seconds later it finally gives up, marks the connection as dead, and passes that information up to the email client, which shows some "connection was dropped by the server" type message.

From a user perspective, it’s a really annoying failure mode that looks like a problem with our server, even though it’s really because of a poor implementation of NAT in their modem.

However, there is a workaround for this. At the TCP connection level, there’s a feature called keepalive that allows the operating system to send regular "is this connection still open?" type packets back and forth between the server and the client. By default keepalive isn’t turned on for connections, but it is possible to turn it on via a socket option. nginx, our frontend IMAP proxy, allows you to turn this on via a so_keepalive configuration option.

However even after you’ve enabled this option, the default time between keepalive "ping" packets is 2 hours. Fortunately again, there’s a Linux kernel tuneable net.ipv4.tcp_keepalive_time that lets you control this value.
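On Linux this can be set at runtime with sysctl, or persistently via /etc/sysctl.conf:

```
# /etc/sysctl.conf: send the first keepalive probe after 4 minutes
# (240 seconds) of idle time, instead of the 2 hour default
net.ipv4.tcp_keepalive_time = 240
```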

By lowering this value to 4 minutes, TCP keepalive packets are sent over open but idle IMAP connections from the server to the client every 4 minutes. The packets themselves don’t contain any data, but what they do do is cause any existing NAT mapping to be marked as "alive" on the user’s modem/router. So poor routers with NAT mappings that would normally time out after 5 minutes of inactivity are kept alive, the user doesn’t see the nasty broken connection problem described above, and there’s no visible downside to the user either.
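For the curious, enabling keepalive on a socket looks something like this. This is a minimal Python sketch of the same mechanism (nginx does the equivalent in C via its so_keepalive option); the per-socket idle/interval/count knobs are Linux-specific, so the sketch falls back to system defaults elsewhere:

```python
import socket

def make_keepalive_socket(idle=240, interval=60, count=4):
    """Create a TCP socket with keepalive probes enabled.

    idle: seconds of inactivity before the first probe (4 minutes here,
    matching the net.ipv4.tcp_keepalive_time value described above).
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Per-socket tunables are Linux-specific; elsewhere the OS defaults apply.
    if hasattr(socket, "TCP_KEEPIDLE"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return s
```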

So this is how things have been for the last 4-5 years, which has worked great.

Unfortunately, there’s a new and recent problem that has now appeared.

iOS 5 now uses long lived persistent IMAP connections (apparently previous versions only used short lived connections). The problem is that our ping packets every 4 minutes mean that the device (iPhone/iPad/iPod) is "woken up" every 4 minutes as well. This means the device never goes into a deeper sleep mode, which causes significantly more battery drain when you set up a connection to the Fastmail IMAP server on iOS 5 devices.

Given the rapid increase in use of mobile devices like iPhones, and the big difference in battery life it can apparently cause, this is a significant issue.

So we’ve decided to re-visit the need for enabling so_keepalive in the first place. The original reason was poor NAT routers with short NAT table timeouts. That was definitely an observed problem a few years back, but we’re not sure how much of a problem it still is: it’s possible that the vast majority of modems/routers sold in the last few years have much better NAT implementations. Unfortunately there’s no easy way to test this, short of actually disabling keepalive and waiting for users to report issues.

So we’ve done that now, and we’ll see over the next week what sort of reports we get. Depending on the number, we have a few options:

  1. If there’s lots of problem reports, we’d re-enable keepalive by default, but set up an alternate server name that has keepalive disabled, and tell mobile users to use that server name instead. The problem with this is that many devices now have auto configuration systems enabled, so users don’t even have to enter a server name, so we’d have to work out how to get that auto configuration to use a different server name.
  2. If there’s not many problem reports, we’d leave keepalive off by default, but set up an alternative server name that has keepalive enabled, and for users that report connection "freezing" problems, we’d tell them to switch to using that server name instead.
  3. Ideally, we’d detect what sort of client was connecting and turn keepalive on or off as needed. This might be possible using software like p0f, but integrating that with nginx would require a bit of work. It also still leaves a dilemma for an iPhone user who spends all day in an office/home on a wireless network behind a poor NAT router: would they prefer the longer battery life, or the better connectivity experience?

I’ll update this post in a week or two when we have some more data.

