Planet Varnish

August 26, 2015

Hrafnhildur Smaradottir: Varnish Software cited in Forrester’s CDN And Digital Performance Vendor Landscape Report

We are very pleased to have been included in Forrester Research’s recent “CDN And Digital Performance Vendor Landscape, Q3 2015” report by analysts Mark Grannan with Ted Schadler and Kevin Driscoll.

August 23, 2015

Kacper Wysocki: My Varnish pet peeves

I’ve been meaning to write a blog entry about Varnish for years now. The closest I’ve come is to write a blog about how to make Varnish cache your debian repos, make you a WikiLeaks cache, and I’ve released Varnish Secure Firewall, but that without a word on this blog. So? SO? Well, after years it turns out there is a thing or two to say about Varnish. Read on to find out what annoys me and the people I meet the most.

varnish on wood

Although you could definitely call me a “Varnish expert” and even a sometimes contributor, and I do develop programs, I cannot call myself a Varnish developer because I’ve shamefully never participated in a Monday evening bug wash. My role in the Varnish world is more… operative. I am often tasked with helping ops people use Varnish correctly, justify its use and cost to their bosses, defend it from expensive and inferior competitors, and sit up long nights with load tests just before launch days. I’m the guy that explains the low risk and high reward of putting Varnish in front of your critical site, the guy that makes it actually be low risk, and I’ll be the first guy on the scene when the code has just taken a huge dump on the CEO’s new pet Jaguar. I am also sometimes the guy who tells these stories to the Varnish developers, although of course they also have other sources. The consequence of this .. lifestyle choice .. is that what code I do write is either short and to the point or .. incomplete.

bug wash

I know we all love Varnish, which is why after nearly 7 years of working with this software I’d like to share with you my pet peeves about the project. There aren’t many problems with this lovely and lean piece of software, but those which are there are sharp edges that pretty much everyone stubs a toe or snags their head on. Some of them are specific to a certain version, while others are “features” present in nearly all versions.

And for you Varnish devs who will surely read this, I love you all. I write this critique of the software you contribute to, knowing full well that I haven’t filed bug reports on any of these issues and therefore I too am guilty of contributing to the problem and not the solution. I aim to change that starting now :-) Also, I know that some of these issues are better lived with than fixed, the medicine being more hazardous than the disease, so take this like all good cooking: with a grain of salt.

Silent error messages in init scripts

Some genius keeps inserting 1>/dev/null 2>&1 into the startup scripts on most Linux distros. This might be in line with some wacko distro policy, but it makes conf errors, and in particular VCL errors, way harder to debug for the common man. Even worse, the `service varnish reload` script calls `varnish-vcl-reload -q`, that’s q for please-silence-my-fatal-conf-mistakes, and the best way to fix this is to *edit the init script and remove the offender*. Mind your p’s and q’s eh, it makes me sad every time, but where do I file this particular bug report?
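Until the init scripts get fixed, you can sidestep the silencing by exercising the VCL yourself; a minimal sketch, assuming the stock config path on your distro:

```sh
# Compile the VCL by hand so errors reach your terminal instead
# of /dev/null (on success this prints the generated C source)
varnishd -C -f /etc/varnish/default.vcl

# Or load it through varnishadm, which reports VCL errors directly
varnishadm vcl.load mycfg /etc/varnish/default.vcl
varnishadm vcl.use mycfg
```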

silent but deadly

debug.health still not adequately documented

People go YEARS using Varnish without discovering watch varnishadm debug.health. Not to mention that it’s anyone’s guess that this has to do with probes, and that there are no other debug.* parameters, except for the totally unrelated debug parameter. Perhaps this was decided to be dev-internal at some point, but the probe status is actually really useful in precisely this form. debug.health is still absent from the parameter list and the man pages, while in 4.0 some probe status and backend info has been put into varnishstat, for which I am surely not the only one to be very thankful indeed.

Bad naming

Designing a language is tricky.


  • Explaining why purge is now ban, and what is now purge is something else entirely, is mind-boggling. This issue will be fixed in 10 years, when people are no longer running Varnish 2.1 anywhere. Explaining all the three-letter acronyms that start with V is just a gas.
  • Showing someone ban("req.url = " + req.url) for the first time is bound to make them go “oh” like a raccoon just caught sneaking through your garbage.
  • Grace and Saint mode… that’s biblical, man. Understanding what it does and how to demonstrate the functionality is still for Advanced Users, explaining this to noobs is downright futile, and I am still unsure whether we wouldn’t all be better off just enabling it by default and forgetting about it.
I suppose if you’re going to be awesome at architecting and writing software, it’s going to get in the way of coming up with really awesome names for things, and I’m actually happy that’s still the way they prioritize what gets done first.
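For the record, a minimal VCL 4.0 sketch of the renamed invalidation primitives; the BAN/PURGE request methods and the ACL are illustrative assumptions, not a standard:

```vcl
acl invalidators { "localhost"; }

sub vcl_recv {
    # "ban" (the old 2.x "purge"): lazily invalidate everything
    # matching an expression
    if (req.method == "BAN" && client.ip ~ invalidators) {
        ban("req.url ~ ^" + req.url);
        return (synth(200, "Banned"));
    }
    # "purge" now removes one object (and its Vary variants) from cache
    if (req.method == "PURGE" && client.ip ~ invalidators) {
        return (purge);
    }
}
```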

Only for people who grok regex

Sometimes you’ll meet Varnish users who do code but just don’t grok regex. It’s weak, I know, but this language isn’t for them.

Uncertain current working directory

This is a problem on some rigs which have VCL code in stacked layers, or really anywhere where it’s more appropriate to call the VCL a Varnish program, as in “a program written for the Varnish runtime”, rather than simply a configuration for Varnish.

Uncertainty

You’ll typically want to organize your VCL in such a way that each VCL is standalone with if-wrapped rules, and they’re all included from one main VCL file, stacking all the vcl_recv’s and vcl_fetches.

Because distros don’t agree on where to put varnishd’s current working directory, which happens to be wherever it was launched from, instead of always chdir $(dirname $CURRENT_VCL_FILE), you can’t reliably specify include statements with relative paths. This forces us to use hardcoded absolute paths in includes, which is neither pretty nor portable.
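A sketch of the resulting style (the file names are hypothetical):

```vcl
# main.vcl: the only file passed to varnishd. Every include must be
# absolute, because the base directory for relative includes depends
# on where varnishd happened to be started.
include "/etc/varnish/acl.vcl";
include "/etc/varnish/backends.vcl";

# include "acl.vcl";   # relative: works or breaks depending on launch dir
```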

Missing default director in 4.0

When translating VCL to 4.0 there is no longer any language for director definitions, which means they are done in vcl_init(), which means your default backend is no longer the director you specified at the top, which means you’ll have to rewrite some logic lest it bite you in the ass.

Also, director.backend() returns an object without a string representation, unlike backend_hint, so you cannot do old-style name comparisons; that is, backends are first-class objects but directors are another class of objects entirely.
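The VCL 4.0 shape of this looks roughly like the following; backend and director names are made up:

```vcl
vcl 4.0;
import directors;

backend web1 { .host = ""; .port = "8080"; }
backend web2 { .host = ""; .port = "8080"; }

sub vcl_init {
    # Directors are now vmod objects created at init time,
    # not top-level declarations
    new cluster = directors.round_robin();
    cluster.add_backend(web1);
    cluster.add_backend(web2);
}

sub vcl_recv {
    # The director you declared "at the top" is no longer the
    # implicit default; pick it explicitly
    set req.backend_hint = cluster.backend();
}
```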

the missing director

VCL doesn’t allow unused backends or probes

Adding and removing backends is a routine ordeal in Varnish.
Quite often you’ll find it useful to keep backup backends around that aren’t enabled, either as manual failover backups, because you’re testing something or just because you’re doing something funky. Unfortunately, the VCC is a strict and harsh mistress on this matter: you are forced to comment out or delete unused backends :-(

Workarounds include using the backends inside some dead code or constructs like

	set req.backend_hint = unused;
	set req.backend_hint = default;

It’s impossible to determine how many bugs this error message has avoided by letting you know that backend you just added, er yes that one isn’t in use sir, but you can definitely count the number of Varnish users inconvenienced by having to “comment out that backend they just temporarily removed from the request flow”.

I am sure it is wise to warn about this, but couldn’t it have been just that, a warning? Well, I guess maybe not, considering distro packaging is silencing error messages in init and reload scripts..

To be fair, this is now configurable in Varnish 4 by setting vcc_err_unref to false, but couldn’t this be the default?
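If you accept the tradeoff, the knob can be flipped like so (config path assumed):

```sh
# At startup
varnishd -f /etc/varnish/default.vcl -p vcc_err_unref=off

# Or at runtime, affecting subsequent vcl.load calls
varnishadm param.set vcc_err_unref off
```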

saintmode_threshold default considered harmful


If many different URLs keep returning bad data or error codes, you might conceivably want the whole backend to be declared sick instead of growing some huge list of sick URLs for this backend. What if I told you your developers just deployed an application which generates 50x error codes, triggering your saint mode for an infinite number of URLs? Well, then you have just DoSed yourself, because you hit this threshold. I usually enable saint mode only after giving my clients a big fat warning about this one, because quite frankly it easily comes straight out of left field every time. Either saint mode is off, or the threshold is Really Large™ or even ∞, and only in some special cases do you actually want this set to an actual number.
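In Varnish 3.x terms that means something like the following; the exact value is a judgment call, the point is that it dwarfs the default of 10:

```sh
# Raise the threshold so a burst of saint-listed URLs from one
# broken deploy can't mark the whole backend sick
varnishadm param.set saintmode_threshold 500000
```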

Then again, maybe it is just my clients and the wacky applications they put behind Varnish.

What is graceful about the saint in V4?

While we are on the subject, grace mode being the most often misunderstood feature of Varnish, the thing has changed so radically in Varnish 4 that it is no longer recognizable to users, and they often make completely reasonable but devastating mistakes trying to predict its behavior.

To be clear on what has happened: saint mode is deprecated as a core feature in V4.0, while the new architecture now allows a type of “stale-while-revalidate” logic. A saintmode vmod is slated for Varnish 4.1.

But as of 4.0, say you have a bunch of requests hitting a slow backend. They’ll all queue up while we fetch a new object, right? Well yes, and then they all error out when that request times out, or if the backend fetch errors out. That sucks. So let’s turn on grace mode, and get “stale-while-revalidate” and even “stale-if-error” logic, right? And send If-Modified-Since headers too, sweet as.

Now that’s gonna work when the request times out, but you might be surprised that it does not when the request errors out with 50x errors. Since beresp.saint_mode isn’t a thing anymore in V4, those error codes are actually going to knock the old object outta cache, and each request is going to break your precious stale-if-error until the backend probe declares the backend sick and your requests become grace candidates.

Ouch, you didn’t mean for it to do that, did you?

The Saint

And if, gods forbid, your apphost returns 404’s when some backend app is not resolving, bam, you are in a cascading hell fan fantasy.

What did you want it to do, behave sanely? A backend response always replaces another backend response for the same URL (not counting Vary headers). To get a poor man’s saint mode back in Varnish 4.0, you’ll have to return (abandon) those erroneous backend responses.
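A minimal sketch of that poor man’s saint mode; the 500 cutoff is an assumption, tune it to whatever your backends consider erroneous:

```vcl
sub vcl_backend_response {
    # Drop 50x responses instead of letting them replace a
    # perfectly good (if stale) object in cache
    if (beresp.status >= 500) {
        return (abandon);
    }
}
```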

Evil grace on unloved objects

For frequently accessed URLs grace is fantastic, and will save you loads of grief, and those objects can have large grace times. However, rarely accessed URLs suffer a big penalty under grace, especially when they are dynamic and meant to be updated from the backend. If a URL is meant to be refreshed from the backend every hour, and Varnish sees many hours between each access, it’s going to serve up that many-hours-old stale object while it revalidates its cache.
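In VCL 4.0 terms, the tradeoff lives in these two knobs; the values are illustrative:

```vcl
sub vcl_backend_response {
    set beresp.ttl = 1h;      # how long the object is considered fresh
    set beresp.grace = 24h;   # how long past expiry we may still serve
                              # it while revalidating: great for hot
                              # URLs, hours-stale for cold ones
}
```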

stale while revalidate
This diagram might help you understand what happens in the “200 OK” and “50x error” cases of graceful request flow through Varnish 4.0.

Language breaks on major versions

This is a funny one, because the first major language break I remember was the one that I caused myself. We were making security.vcl, and I was translating rules from mod_security and having trouble with it because Varnish used POSIX regexes at the time. I was writing this really god-awful script to translate PCRE into POSIX when Kristian, who conceived of security.vcl, went to Tollef (both were working in the same department at the time) and asked, in his classical brook-no-argument kind of way, "why don’t we just support Perl regexes?".
Needless to say, (?i) spent a full 12 months afterwards cursing myself while rewriting tons of nasty client VCL code from POSIX to PCRE and fixing occasional site-devastating bugs related to case-sensitivity.

Of course, Varnish is all the better for the change, and would get nowhere fast if the devs were to hang on to legacy, but there is a lesson in here somewhere.


So what's a couple of sed 's/req.request/req.method/'s every now and again?
This is actually the main reason I created the VCL.BNF. For one, it got the devs thinking about the grammar itself as an actual thing (which may or may not have resulted in the cleanups that make VCL a very regular and clean language today), but my intent was to write a parser that could parse any version of VCL and spit out any other version of VCL, optionally pruning and pretty-printing of course. That is still really high on my todo list. Funny how my clients will book all my time to convert their code for days but will not spend a dime on me writing code that would basically make the conversion free and painless for everyone forever.

Indeed, most of these issues are really hard to predict consequences of implementation decisions, and I am unsure whether it would be possible to predict these consequences without actually getting snagged by the issues in the first place. So again: varnish devs, I love you, what are your pet peeves? Varnish users, what are your pet peeves?

August 11, 2015

Per Buer: Conditional requests versus cache invalidation

If your content ever changes you’ll need some way to make sure the updated content reaches the users. The traditional way of doing this is to devise some sort of cache invalidation.

August 05, 2015

Per Buer: Origin protection with Varnish Cache

Varnish Cache is versatile. To date we’ve seen it utilized as a website cache, API gateway/manager, API cache, CDN reverse proxy and a few others.

July 23, 2015

Per Buer: Ratelimiting the varnishlog

Varnish is typically very busy, running several thousands of transactions per second. Combine this with the rather extreme verbosity of varnishlog, and you have a firehose of information that can be rather hard to manage.

July 01, 2015

Per Buer: Proper sticky session load balancing in Varnish

Sometimes your web application needs to maintain state per session. This can cause problems when you are using a load balancer such as Varnish Cache. In order to mitigate this we need to make sure Varnish is fully aware of what is going on and that the sessions stick to the client. In practice, it means we need to make sure that a returning visitor gets the same application server every time.
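One way to get such stickiness is to hash on the session cookie, sketched here with the hash director from the directors VMOD (available from Varnish 4.1); the backend names and the choice of cookie are assumptions:

```vcl
vcl 4.0;
import directors;

backend app1 { .host = ""; .port = "8080"; }
backend app2 { .host = ""; .port = "8080"; }

sub vcl_init {
    new sticky = directors.hash();
    sticky.add_backend(app1, 1.0);
    sticky.add_backend(app2, 1.0);
}

sub vcl_recv {
    # Same session cookie -> same hash -> same application server
    set req.backend_hint = sticky.backend(req.http.Cookie);
}
```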

June 26, 2015

Ingvar Hagelund: hitch-1.0.0-beta for Fedora and EPEL

The Varnish project has a new little free software baby arriving soon: Hitch, a scalable TLS proxy. It will also be made available with support by Varnish Software as part of their Varnish Plus product.

A bit of background:

Varnish is a high-performance HTTP accelerator, widely used over the Internet. To use Varnish with https, it is often fronted by general http/proxy servers like nginx or apache, though a more specific proxy-only high-performance tool would be preferable. So the Varnish developers looked at stud.

hitch is a fork of stud, maintained by the Varnish development team, as stud seems abandoned by its creators after the project was taken over by Google, with no new commits since 2012.

I wrapped hitch for fedora, epel6 and epel7, and submitted them for Fedora and EPEL. Please test the latest builds and add feedback: . The default config is for a single instance of hitch.

The package has been reviewed and was recently accepted into Fedora and EPEL (bz #1235305). Update August 2015: Packages are pushed for testing. They will trickle down to stable eventually.

Note that there also exists a fedora package of the (old) version of stud. If you use stud on fedora and want to test hitch, the two packages may coexist and should be installable in parallel.

To test hitch in front of varnish, in front of apache, you may do something like this (tested on el7):

  • Install varnish, httpd and hitch
      sudo yum install httpd varnish
      sudo yum --enablerepo=epel-testing install hitch || sudo yum --enablerepo=updates-testing install hitch
  • Start apache
      sudo systemctl start httpd.service
  • Edit the varnish config to point to the local httpd, that is, change the default backend definition in /etc/varnish/default.vcl , like this:
      backend default {
        .host = "";
        .port = "80";
      }
  • Start varnish
      sudo systemctl start varnish.service
  • Add an ssl certificate to the hitch config. For a dummy certificate,
    the certificate from the hitch source may be used:

      sudo cp /etc/pki/tls/private/
  • Edit /etc/hitch/hitch.conf. Change the pem-file option to use that cert
      pem-file = "/etc/pki/tls/private/"
  • Start hitch
      sudo systemctl start hitch.service
  • Open your local firewall if necessary, by something like this:
      sudo firewall-cmd --zone=public --add-port=8443/tcp
  • Point your web browser to https://localhost:8443/ . You should be greeted with a warning about a non-official certificate. Past that, you will get the apache frontpage through varnish and hitch.

    Enjoy, and let me hear about any interesting test results.


    Varnish Cache is a powerful and feature-rich front side web cache. It is also very fast, that is, fast as in on steroids, and powered by The Dark Side of the Force.

    Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps, to massive IPv4/IPv6 multi data center media hosting, and everything through container solutions, in-house, cloud, and data center, contact us at

    June 09, 2015

    Per Buer: How to implement SSL/TLS in Varnish Plus?

    A couple of weeks back we shared that we’ll be adding SSL/TLS support in Varnish Plus. Now that the announcement is out and we’ve presented it on a couple of occasions it is time to go through implementation details.

    May 15, 2015

    Lasse Karstensen: Introducing hitch – a scalable TLS terminating proxy.

    The last couple of weeks we’ve been pretty busy making SSL/TLS support for Varnish Cache Plus 4. Now that the news is out, I can follow up with some notes here.

    The setup will be a TLS terminating proxy in front, speaking PROXY protocol to Varnish. Backend/origin support for SSL/TLS has been added, so VCP can now talk encrypted to your backends.

    On the client-facing side we are forking the abandoned TLS proxy called stud, and giving it a new name: hitch.

    hitch will live on github as a standalone open source project, and we are happy to review patches/pull requests made by the community. Here is the source code:

    We’ve picked all the important patches from the flora of forks, and merged it all into a hopefully stable tool. Some of the new stuff includes: TLS1.1, TLS1.2, SNI, wildcard certs, multiple listening sockets. See the CHANGES.rst file for the updates.

    Varnish Software will provide support on it for commercial uses, under the current Varnish Plus product package.

    May 13, 2015

    Per Buer: Flying pigs and SSL in Varnish Cache Plus

    After quite a bit of time and discussion, I’m very happy to announce that we’ve finished implementing SSL in Varnish Cache Plus. We’re currently testing and documenting the software and a release will happen at our Varnish summit in Silicon Valley in early June.

    May 09, 2015

    Hrafnhildur Smaradottir: Varnish Software a Gartner Cool Vendor in Web-Scale Platforms

    We are excited to be named a Gartner "Cool Vendor" in Web-Scale Platforms, 2015. It is a true honor to be recognised by Gartner as one of the vendors out there that help drive innovation and performance in web-scaling.

    April 27, 2015

    Hrafnhildur Smaradottir: Amedia on Microservices Architecture at the Varnish Summit in Oslo

    We are excited to announce a new addition to the upcoming Varnish summit in Oslo, May 7th. Norwegian Amedia, a loyal and engaged customer, will be speaking, represented by Simen Graff, the company's Head of Infrastructure and Jakob Vad Nielsen, Senior Developer.

    April 24, 2015

    Stefan Caunter: Rogers needs to pay to solve its CFL problem

    So, there’s an article on the Toronto Sun website. Understandably, they won’t publish my comment, which appears below. Spam and pointless bickering are fine, apparently. Sigh. Here is my take. Rogers Communications and the Jays situation in the dome is driving the debate about BMO Field. It has nothing to do with the Argos […]

    April 13, 2015

    Stefan Caunter: The unconditional interest of Leafs fans

    Leafs tickets are seen as an investment to hold, not a conditional payment on success. The Leaf team is an incredibly valuable sports property that is basically destroyed every year by the media that keep them incredibly valuable. How? The players are given exalted status based on next to nothing on an achievement scale. People get […]

    April 09, 2015

    Per Buer: Introducing the Varnish API Engine

    Over the last couple of years we’ve seen an explosion in the use of HTTP-based APIs. We’ve seen them go from being a rather slow and useless but interesting technology fifteen years ago to today’s high-performance RESTful interfaces that power much of the web and most of the app-space. Varnish Cache has been used for HTTP-based APIs since its inception. The combination of caching, high performance and the flexibility brought by VCL makes it an ideal proxy for APIs. We’ve seen people doing rather complex protocol negotiations in VCL to do interesting things like matching frontend and backend protocols.

    March 05, 2015

    Ingvar Hagelund: varnish-4.0.3 for Fedora and EPEL

    varnish-4.0.3 was released recently. I have wrapped packages for Fedora and EPEL, and requested updates for epel7, f21 and f22. They will trickle down as stable updates within some days. I have also built packages for el6, and after some small patching, even for el5. These builds are based on the Fedora package, but should be only cosmetically different from the el6 and el7 packages available from

    Also note that Red Hat finally caught up, and imported the necessary selinux-policy changes for Varnish from fedora into el7. With selinux-policy-3.13.1-23.el7, Varnish starts fine in enforcing mode. See RHBA-2015-0458.

    My builds for el5 and el6 are available here: Note that they need other packages from EPEL to work.

    Update 1: I also provide an selinux module for those running varnish-4.0 on el6. It should work for all versions of varnish-4.0, including mine and the ones from

    Update 2: Updated builds with a patch for bugzilla ticket 1200034 are pushed for testing in f21, f22 and epel7. el5 and el6 builds are available on link above.




    March 04, 2015

    Hrafnhildur Smaradottir: Poul-Henning Kamp speaking at Copenhagen Summit

    With the Varnish Summit in Copenhagen only three weeks away (March 26th) we are thrilled to announce that Poul-Henning Kamp, the lead architect and developer of Varnish Cache, will be joining us there.

    February 26, 2015

    Hrafnhildur Smaradottir: Varnish Summit in London sold out!

    We are looking forward to the Varnish Summit in London next week (March 5 in Tech City). And Varnish users and other enthusiasts are apparently equally excited because we are sold out and have a full waiting list!

    February 16, 2015

    Hrafnhildur Smaradottir: Girl geeks meet-up at Varnish Software

    Last week we had the pleasure to co-host a Girl Geek Dinner event at our Oslo offices in collaboration with the Norwegian chapter of the event series (#ggdo).

    February 11, 2015

    Hrafnhildur Smaradottir: Keen on meeting up with us in Copenhagen?

    The Varnish Software series 2015 is off to a great start and next stop is wonderful Copenhagen. To respond to last year's popularity of the series in Scandinavia we decided Copenhagen would be one of the first cities we'd visit.

    January 28, 2015

    Hrafnhildur Smaradottir: London Tech City, here we come!

    We are thrilled to announce the venue for this year's Varnish summit in London. We've selected a beautiful penthouse in East London Tech City.

    January 19, 2015

    Lasse Karstensen: PROXY protocol in Varnish

    Dag has been working on implementing support for HAProxy’s PROXY protocol[1] in Varnish. This is a protocol that adds a small header to each incoming TCP connection describing who the real client is, added by (for example) an SSL terminating process (since the source address Varnish sees is that of the terminating proxy).

    We’re aiming for merging this into Varnish master (so perhaps in 4.1?) when it is ready.

    The code is still somewhat unfinished, timeouts are lacking and some polishing needed, but it works and can be played with in a development setup.

    Code can be found here:

    I think Dag is using haproxy to test it with. I’ve run it with stunnel (some connection:close issues to figure out still), and I’d love it if someone could test it with ELB, stud or other PROXY implementations.


    January 08, 2015

    Ingvar Hagelund: rpm packages of vmod-ipcast

    Still on varnish-3.0? Missing the ability to filter X-Forwarded-For through ACLs? Use vmod ipcast by Lasse Karstensen.

    I cleaned up and rolled an rpm package of vmod-ipcast-1.2 for varnish-3.0.6 on el6. It’s available here:

    Note that the usage has changed a bit since the last version. You are no longer permitted to change client.ip (and that’s probably a good thing). Now it’s called like this, returning an IP address object:


    If the string does not resemble an IP address, the fallback IP is returned. Note that if the fallback IP is an invalid address, varnishd will crash!

    So, if you want to filter X-Forwarded-For through an ACL, you would do something like this:

    import ipcast;
    sub vcl_recv {
       # Add some code to sanitize X-Forwarded-For above here, so it resembles one single IP address
       if ( ipcast.ip(req.http.X-Forwarded-For, "") ~ someacl ) {
         # Do something special
       }
    }

    And that’s all for today.


    November 27, 2014

    Per Buer: Linux vm tunables

    There are quite a few tunables in the Linux kernel. Reading the documentation it is clear that quite a few of them could have an impact on how Varnish performs. One that caught my attention is the dirty_background_writeback tunable. It allows you to set a limit for how much of the page cache may be dirty, i.e. contain data not yet written to disk, before the kernel will start writing it out.
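    The vm.dirty_* family can be inspected and tuned with sysctl; a sketch, where the 5% figure is only an example:

```sh
# Show when background writeback kicks in (percent of dirtyable memory)
sysctl vm.dirty_background_ratio vm.dirty_ratio

# Start writing dirty pages out earlier, smoothing IO bursts
sysctl -w vm.dirty_background_ratio=5
```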


    November 24, 2014

    Per Buer: Introducing Varnish High Availability

    There is a reason why people install Varnish Cache in their servers. It’s all about the performance. Delivering content from Varnish is often a thousand times faster than delivering it from your web server. As your website grows, and it usually grows significantly if your cache hit rates are high, you’ll rely more and more on Varnish to deliver the brunt of the requests that are continuously flowing into your infrastructure.


    November 19, 2014

    Per Buer: Introducing Varnish Massive Storage Engine

    Varnish was initially made for web site acceleration. We started out using a memory mapped file to store objects in. It had some problems associated with it and was replaced with a storage engine that relied on malloc to store content. While it usually performed better than the memory mapped files, performance suffered as the content grew past the limitations imposed by physical memory.

    November 10, 2014

    Per Buer: Scaling up the dmcache tests

    Following up on the test of dmcache we decided to scale it up a bit, to get better proportions between RAM, SSD and HDD. So we took the dataset and made it 10 times bigger. It is now somewhere around 30GB with an average object size of 800Kbyte. In addition we made the backing store for Varnish ten times bigger as well, increasing it from 2GB to 20GB. This way we retain the cache hit rate of around 84% but we change the internal cache hit rates significantly. The results were pretty amazing and show what a powerful addition dmcache can be for IO intensive workloads.

    November 07, 2014

    Per Buer: Accelerating your HDD with dm-cache or bcache

    As you might or might not know, we’ve been working on this storage backend for a year now, built for handling large data volumes like the ones we see in online video and CDNs. The new storage backend is written with performance in mind, leveraging some novel ideas we have to make things go a lot faster. If you want to know more you can get in touch with me or come to one of our summits where I’ll be presenting. Since the new storage engine relies much more on IO capacity, we’re suddenly looking at things such as filesystems and IO performance. We expect most deployments of this software to take place on solid state drives. However, if you are going to cache up a petabyte-sized video library, having a secondary cache level with some SATA-based disk cabinets might make a lot of sense. The problem with SATA-based storage is that the IO capacity is abysmal compared to solid state drives. So, we want as much memory as possible to achieve a reasonably high hit rate for the Linux page cache. However, if you attach 20 terabytes of SATA storage, 384GB of RAM is still a bit less than I would like to see in order to cache this. So, how do we cache 20TB in a reasonably cost-effective way?

    October 13, 2014

    Lasse KarstensenVarnish VMOD static code analysis

    I recently went looking for something similar to pep8/pylint when writing Varnish VMODs, and ended up with OCLint.

    I can’t really speak to how good it is, but it catches the basic stuff I was interested in.

    The documentation is mostly for cmake, so I’ll give a small tutorial for automake:

  • (download+install oclint to somewhere in $PATH)
  • apt-get install bear
  • cd libvmod-xxx
  • ./; ./configure --prefix=/usr
  • bear make # “build ear” == bear. writes compile_commands.json
  • cd src
  • oclint libvmod-xxx.c # profit
    This will tell you about unused variables, useless parentheses, dead code and so on.

    October 03, 2014

    Lasse KarstensenAnnouncing libvmod-tcp: Adjust Varnish congestion control algorithm.

    I’ve uploaded my new TCP VMOD for Varnish 4 to github, you can find it here:

    This VMOD allows you to get the estimated client socket round trip time, and then let you change the TCP connection’s congestion control algorithm if you’re so inclined.

    Research[tm][0] says that Hybla is better for long high latency links, so currently that is what it is used for.

    Here is a quick VCL example:

    if (tcp.get_estimated_rtt() > 300) {
        set req.http.x-tcp = tcp.congestion_algorithm("hybla");
    }

    One thing to note is that VCL handling happens very early in the TCP connection lifetime. We’ve only just read and acked the HTTP request. The readings may be off; I’m analyzing this currently.
    (As I understand it the Linux kernel will keep per-ip statistics, so for subsequent requests this should get better and better..)

    0: Esterhuizen, A., and A. E. Krzesinski. “TCP Congestion Control Comparison.” (2012).

    October 02, 2014

    Per BuerMajor new product releases scheduled for the Varnish Summits

    Announcement: For the last year we’ve been doing quite a bit of development which we’re finally ready to present. We have three major product updates that we’re about to launch. In the upcoming Summits we’ll be announcing these three updates.

    September 30, 2014

    Lasse KarstensenFresh Varnish packages for Debian/Ubuntu and Redhat systems

    We use continuous integration when developing Varnish Cache. This means that we run our internal test suite (varnishtest) on all commits, so we catch our mistakes earlier.

    This pipeline of build jobs sometimes ends up producing binary packages of Varnish, which may be useful to people once they know they exist. They may not be the easiest to find, which this blog post tries to remedy.

    Development-wise, Varnish Cache is developed with Git, with a master branch for development and a set of production branches, currently 3.0 and 4.0.

    Unreleased packages for Varnish master can be found here:

    Unreleased packages of Varnish 4.0 can be found here:

    (There is also a set of 3.0 jobs, but you should really go for 4.0 these days.)

    The latest commits in each of the production branches may contain fixes we’ve added after the last production release, but haven’t cut a formal release for yet. (For example there are some gzip fixes in the 3.0 branch awaiting a 3.0.6 release, which I really should get out soon.)

    Some jobs in the job listing just check that Varnish builds, without creating any output (or artifacts, as Jenkins calls it). This applies to any job with “-build-” in the name, for example varnish-4.0-build-el7-x86_64 and varnish-4.0-build-freebsd10-amd64.

    The Debian and Ubuntu packages are all built from one job currently, called varnish-VERSION-deb-debian-wheezy-amd64. Press “Expand all” under artifacts to get the full list.

    Redhat/RHEL packages are built in the different el5/el6/el7 jobs.

    The unreleased packages built for 3.0 and 4.0 are safe. This is the process used to build the officially released packages, just a step earlier in the process. The varnish-master packages are of course failing from time to time, but that is to be expected.

    The version numbers in the packages produced may be a bit strange, but that is what you get with unreleased software builds.

    I’m happy to improve this process and system if it can help you run newer versions of Varnish; comments (either here or on IRC) are appreciated.

    September 25, 2014

    Ruben RomeroVarnish Developer News: VDD14Q3 in Oslo, Norway

    Once every quarter there is a VDD - Varnish Developer Day. We typically gather 8-18 people who are interested in Varnish development to discuss exactly that. Unless you want to spend your day on Varnish packaging, internals or VCL design, you are better off just reading this post and/or the notes from the meeting :-) The last VDD was held in Oslo, in the offices of Schibsted at Akersgata. Afterwards we went to a DevOps meetup and finished the evening with some food and drinks.

    September 16, 2014

    Espen BraastadBlog for a Sysadmin - Monitoring Health in Varnish Cache

    At Varnish Software, we like to share tips and tricks and ensure our knowledge is being shared with our readers. In what I hope will become a series under the guise of 'Blog for a Sysadmin', I'd like to take you through the essentials of maintaining your Varnish Cache setup. First up—Monitoring your Varnish Cache setup.
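    As a concrete starting point, the cache hit rate can be derived from varnishstat’s MAIN.cache_hit and MAIN.cache_miss counters (Varnish 4 counter names). A sketch, with sample numbers standing in for live output:

```shell
# In production you would feed "varnishstat -1" into the awk script;
# here two sample counter lines stand in for live output:
printf 'MAIN.cache_hit 840\nMAIN.cache_miss 160\n' |
    awk '{ c[$1] = $2 }
         END { printf "%.2f\n",
               c["MAIN.cache_hit"] / (c["MAIN.cache_hit"] + c["MAIN.cache_miss"]) }'
# prints 0.84, i.e. an 84% hit rate
```

Wrapping this in a cron job or a check script for your monitoring system is a reasonable first health metric, alongside watching MAIN.threads and MAIN.n_lru_nuked.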

    September 10, 2014

    Per BuerA message for North America from Per Buer, Founder and CTO

    Announcement: Now well known amongst the Varnish Cache and tech community, we’ve got our eyes on North America, namely hotbeds like Silicon Valley (surprise, surprise). So, what better way to show we’re serious than hosting a Varnish Summit in San Francisco to present and discuss all things Varnish? Learn from the masters on 4 December.

    September 01, 2014

    Ruben RomeroNext Varnish Core Developer meeting to happen in Oslo

    Just a reminder that on September 17th the developer team will be gathering in Oslo for their quarterly meeting.

    August 21, 2014

    Espen Braastad10 Varnish Cache mistakes and how to avoid them

    Varnish Cache is one of the world’s most popular HTTP caching solutions. Like any piece of critical web content delivery software, it pays to know some fundamentals and have a few tricks up your sleeve in order to reach your optimal setup and ensure things are running smoothly when you need them most. We’ve compiled two handfuls of common mistakes Varnish Cache users can run into. If you have more you’d like to suggest, we’d love to write about them. Simply comment below or email

    June 26, 2014

    Per BuerStill using the GET and HEAD commands? Meet httpie.

    I've been using libwww-perl's GET, POST and HEAD commands for at least the last ten years. Typing "GET -Used" is more or less muscle memory by now, and my attempts at getting friendly with the curl command line have never borne fruit. libwww-perl is pretty ancient and has some annoying snags, and those of my friends who use curl are not raving about their experiences either. Thanks to a post on Hacker News a couple of weeks back I found something new, and it has changed a very, very, very small part of my life. Somewhat.
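    For the curious, a few httpie invocations covering the same ground as GET and HEAD (example.org is a placeholder host):

```shell
http example.org                     # plain GET, pretty-printed response
http --headers example.org           # response headers only, like HEAD
http -v POST example.org/api name=varnish   # -v shows the request too;
                                            # key=value pairs become a JSON body
```

These need network access and the httpie package installed, so treat them as an illustration rather than a copy-paste test.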

    June 25, 2014

    Per BuerPartitioning your Varnish Cache

    If you have multiple virtual hosts being handled by the same server, you sometimes want some sort of QoS to apply to your caching. You might want to reserve a certain amount of memory for each virtual host. Let me show you how it can be done.
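    A sketch of how this can look in Varnish 4, assuming two example hosts: give each virtual host its own named malloc store on the varnishd command line, then steer objects into them with beresp.storage_hint (store names and sizes here are examples):

```shell
# Two named stores, one per virtual host:
varnishd -a :80 -f /etc/varnish/default.vcl \
    -s static=malloc,1G \
    -s dynamic=malloc,256m
```

```vcl
sub vcl_backend_response {
    if (bereq.http.host == "static.example.com") {
        set beresp.storage_hint = "static";
    } else {
        set beresp.storage_hint = "dynamic";
    }
}
```

Each store does its own LRU eviction, so one noisy virtual host can no longer push the other’s objects out of cache.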

    June 24, 2014

    Per BuerNotes from the last Varnish Developer Day

    Once every quarter there is a VDD - Varnish Developer Day. We typically gather 8-16 people who are interested in Varnish development and discuss where we are and where we wanna go. The last VDD was held in Stockholm, in the offices of Redpill Linpro in Solna. Roughly, the outcome was something like this.

    June 23, 2014

    Per BuerAdding headers to gain insight into VCL

    Varnish logs a lot. Sometimes it is a bit too much, and verifying that your VCL works the way it is supposed to can be a bit of a bother, especially on a busy server. The new logging in Varnish 4 helps a lot, but much of the time it’s easier to just add a header or two to indicate what is happening.
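    A typical example of the technique in Varnish 4 VCL: mark every delivered response as a hit or a miss, which is far easier to eyeball in a browser or with curl than a full log stream:

```vcl
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
        set resp.http.X-Cache-Hits = obj.hits;
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
```

The X-Cache header names are just a convention; strip them again before production if you do not want to leak cache internals to clients.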

    June 20, 2014

    Per BuerGrace in Varnish 4 - Stale-while-revalidate semantics in Varnish

    Grace mode has been a key feature in Varnish since its inception in Varnish 2.1. Initially a feature to mitigate thread pile-ups and the resulting thundering herd, it was soon adapted for other uses, mainly having Varnish continue to serve requests when the backend got into trouble. As part of the rework of the threading model in Varnish 4.0, grace got somewhat different semantics. The main change is that Varnish is now capable of delivering a stale object and issuing an asynchronous refresh request, thereby removing the penalty the first user pays when hitting a stale object.
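    In Varnish 4 VCL the whole mechanism boils down to one assignment in vcl_backend_response; the TTL and grace values below are just examples:

```vcl
sub vcl_backend_response {
    set beresp.ttl = 1m;      # objects are fresh for a minute...
    set beresp.grace = 1h;    # ...and may be served stale for up to an hour
                              # while Varnish refreshes them in the background
}
```

With this in place, a request that arrives after the TTL has expired but within the grace window gets the stale copy immediately, and the refresh happens asynchronously.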

    June 10, 2014

    Espen BraastadIntroducing the Varnish Tuner

    Varnish Cache is a high performance web application accelerator that performs very well out of the box. The default parameters have evolved since the start of the project, and they are fine for a small site or a developer installation. However, in order to run Varnish at scale you might want to tune it a bit to make it perform and scale even better.
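    Parameters can be inspected and adjusted at runtime through varnishadm, or set at startup on the varnishd command line; thread pool sizing is the classic example. The values below are illustrative, not recommendations:

```shell
varnishadm param.show thread_pools       # inspect a current value
varnishadm param.set thread_pool_min 200 # change it on a running instance
# Or persist the tuning at startup:
varnishd -a :80 -f /etc/varnish/default.vcl \
    -p thread_pool_min=200 -p thread_pool_max=4000
```

Changes made via param.set do not survive a restart, so anything you settle on should end up in the startup configuration.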

    June 03, 2014

    Lasse KarstensenWhat happened to ban.url in Varnish 4.0?

    tl;dr; when using Varnish 4 and bans via varnishadm, instead of “ban.url EXPRESSION”, use “ban req.url ~ EXPRESSION”.

    In Varnish 3.0 we had the ban.url command in the varnishadm CLI. This was a shortcut function expanding to the somewhat cryptic (but powerful) ban command. In essence, ban.url just took your expression, prefixed it with “req.url ~ ” and fed it to ban. No magic.

    We deprecated this in Varnish 4.0, and now everyone has to update their CMS’s cache invalidation plugin. Hence this blog post. Perhaps it will help. Perhaps not. :-)
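    Side by side, the old shortcut and its Varnish 4 replacement (the URL pattern is an example):

```shell
# Varnish 3.0 shortcut:
varnishadm ban.url "^/images/"
# Varnish 4.0 equivalent:
varnishadm "ban req.url ~ ^/images/"
```

Note the quoting in the second form: the whole ban expression is one CLI argument.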

    Some references:

    May 16, 2014

    Per BuerGetting virtual hosts right with Varnish Cache.

    When answering questions in the forums, on the mailing lists or through our support, one of the most common topics to come up is virtual hosts. Virtual hosts are tricky, and with Varnish in front of Apache/Nginx it is common to misconfigure them. Here I’ll explain how they actually work, how to verify your setup is working and how to set it up.
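    The single most important habit is normalizing the Host header in vcl_recv, so the cache does not store a separate copy per spelling of the same site. A sketch, with example.com standing in for your domain:

```vcl
sub vcl_recv {
    # Collapse "www.example.com", "WWW.EXAMPLE.COM:80" etc. into one name,
    # so all variants hit the same cached objects:
    if (req.http.host ~ "(?i)^(www\.)?example\.com(:[0-9]+)?$") {
        set req.http.host = "example.com";
    }
}
```

The backend web server then sees a single canonical Host and its own virtual host configuration stays simple.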

    April 16, 2014

    Per BuerThread organization in Varnish Cache 4.0

    One of the two biggest changes in Varnish 4.0 is how the threads work. Varnish uses threads for doing all the heavy lifting and it seems to be working out quite well. In Varnish 3.0 one thread would service each client, doing whatever that client wanted it to do. Within reason, obviously. These are very decent threads. The thread would deliver from cache, fetch content from the backend, pipe, etc.

    April 14, 2014

    Yves HwangFrontend/Senior Software Developer wanted in our London or Oslo office

    Exciting open source company seeks Frontend/Senior Software Developer in our London or Oslo office.

    April 10, 2014

    Hrafnhildur SmaradottirVarnish Cache 4.0 is released!

    Varnish Cache 4.0 is out! The Varnish team is thrilled and has just celebrated with a sip of bubbly. We are happy to announce this release to our community and the world. It includes some awesome enhancements for all of you to enjoy!

    April 09, 2014

    Ruben RomeroVarnish 4.0 Q&A on Performance, VMODs, SSL, IMS, SWR and more...

    During our "What's coming in Varnish 4.0?" Hangout (see the video now) two weeks ago we got some questions. I am getting back to you with some answers.

    MacYvesBuilding vagent2 for Varnish Cache 4.0.0 beta 1 for OS X 10.9.2

    For those keen bunnies who wish to jump in and help us test Varnish Cache 4.0.0 beta 1 with varnish-agent 2, here’s how you do it on OS X 10.9.2 Mavericks.


    Homebrew dependencies

    Install the following with Homebrew

    • automake 1.14.1
    • libtool 2.4.2
    • pkg-config 0.28
    • pcre 8.34
    • libmicrohttpd 0.9.34

    Build varnish cache 4.0.0 beta 1

    1. Download and extract varnish cache 4
    2. run ./
    3. run ./configure
    4. make

    Build varnish-agent2 for varnish cache 4.0.0 beta 1

    1. Clone varnish-agent from repo
    2. Checkout the varnish-4.0-experimental branch
    3. export VARNISHAPI_CFLAGS=-I/tmp/varnish/varnish-4.0.0-beta1/include
    4. export VARNISHAPI_LIBS="-L/tmp/varnish/varnish-4.0.0-beta1/lib/libvarnishapi/.libs -lvarnishapi"
    5. run ./
    6. run ./configure
    7. make

    Note that if you run make install for Varnish Cache 4 or varnish-agent, it will install each of them for you.

    Per BuerLogging in Varnish 4.0

    The logging in Varnish Cache is one of the unique features that in my mind sets it apart from the rest of the software world. It combines very detailed logging with performance, manageability, sensible privacy defaults and great debugging help. The shared memory log records everything that happens, without the need to adjust log levels and without significantly affecting performance, eliminating the need to turn on any debug switch. Many of you have surely been in a situation where you have an application that is misbehaving. You increase the log level to debug and the problem magically disappears - obviously caused by the application slowing down just enough that the race condition you encountered earlier is now gone. The weakness of logging everything is that so much information is available that the administrator can sometimes be overwhelmed. It’s a figurative firehose of information, and drinking from it can be painful. Martin has implemented a new logging framework in Varnish Cache 4.0. Out of all the new stuff in Varnish Cache 4.0 this might be the most significant piece. It’s also the most complex feature, requiring quite a bit of time to fully understand how it works.
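    To give a flavour of the Varnish 4 tools, varnishlog can group and filter that firehose with VSL queries; the URL pattern below is an example:

```shell
# Show complete request groups whose URL matches a prefix:
varnishlog -g request -q 'ReqURL ~ "^/api/"'
# Or restrict output to a few tags:
varnishlog -i ReqHeader -i RespHeader
```

Because the query runs against the shared memory log, filtering this way has no effect on what gets logged, only on what you see.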

    Per BuerLocking down Varnish 4.0 - high security installations of Varnish

    Varnish Cache 4.0 is just around the corner. We did a beta release a couple of weeks back, the feedback has been pretty good, and I think the release will be out later this week. Anyways, there are a lot of changes in Varnish 4.0. A couple of the changes in Varnish Cache 4.0 are security-related, giving you the option to lock down a Varnish installation to make it ultra secure.

    April 01, 2014

    Per BuerVarnish Cache 4.0 beta 1 is out

    The beta is out. Varnish Cache 4.0 is just around the corner and we need a bit of help to cross the finishing line. The beta seems to be pretty solid; we've given it a fair beating in our testing facilities as well as some pretty rough production workloads. The final production tests last week were done in cooperation with A-Media and Redpill Linpro and were quite encouraging. Varnish 4.0 ran stable on their workload, happily chugging away at 1300 requests a second. If you have the option of giving Varnish Cache 4.0 a spin on your website, or giving it part of your traffic, we would be grateful. Crashes are very unlikely at this point, but just to be safe we would recommend having a load balancer in front of the beta so you can gradually increase the load. If you need any help with converting the VCL, please let us know.

    March 28, 2014

    Per BuerWhen communities feel betrayed

    This week the Internet has been all up in arms about Facebook acquiring Oculus. Many people have been angry about Facebook buying this company, something we did not see when Facebook acquired WhatsApp or Instagram.

    March 25, 2014

    Yves HwangVCL change management and continuous integration with your Varnish using VAC API and Git

    Managing multiple Varnish instances and their respective VCL is made significantly easier with Varnish Administration Console (VAC) through its API. This blog post aims to illustrate an example of VCL change management and continuous integration with multiple Varnish instances using VAC API and a little magic from Git.

    March 24, 2014

    Ruben RomeroJoin our Varnish 4.0 Hangout on Wednesday

    Two days to go for our Varnish 4.0 Hangout with the Varnish Core Developer team. Make sure you join us. RSVP now!