We recently launched Related Posts across WordPress.com, so it's time to pop the hood and take a look at what ended up in our engine.
There’s a lot of good information spread across the web on how to use Elasticsearch, but I haven’t seen many detailed discussions of what it looks like to scale an ES cluster for a large application. Being an open source company means we get to talk about these details. Keep in mind, though, that scaling is very dependent on the end application, so not all of our methods will be applicable to you.
I’m going to spread our scaling story out over four posts:
- An overview of our current system (this post)
- Indexing
- Querying Performance
Scale
WordPress.com is in the top 20 sites on the Internet. We host everyone from very big clients all the way down the long tail where this blog resides. The primary project goal was to provide related posts on all of those page views (14 billion a month and growing). We also needed to replace our aging search engine that was running search.wordpress.com for global searches, and we have plans for many more applications in the future.
I’m only going to focus on the related posts queries (searches within a single blog) and the global queries (searches across all blogs). They illustrate some really nice features of ES.
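To make the two query shapes concrete, here is a sketch of what a single-blog related-posts query could look like in the 0.90-era query DSL. The field names (`title`, `content`) and the use of `more_like_this` are illustrative assumptions, not our production query; the key point is that a related-posts search is routed to a single blog's shard, while a global query omits the routing and fans out to every shard.

```json
{
  "query": {
    "more_like_this": {
      "fields": ["title", "content"],
      "like_text": "text of the source post goes here",
      "min_term_freq": 1,
      "min_doc_freq": 5
    }
  },
  "size": 5
}
```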
Currently every day we average:
- 23m queries for related posts within a single shard
- 2m global queries across all shards
- 13m docs indexed
- 10m docs updated
- 2.5m docs deleted
- 250k delete by query operations
Our index has about 600 million documents in it, with 1.5 million added every day. Including replication there is about 9 TB of data.
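As a back-of-envelope check on those numbers (assuming the 9 TB figure includes all three copies of each shard):

```python
total_tb = 9               # total on-disk data, including replication
copies = 3                 # three copies of each shard
docs = 600_000_000         # documents in the index

primary_tb = total_tb / copies                 # data without replication
bytes_per_doc = primary_tb * 1024**4 / docs    # average on-disk size per doc

print(f"{primary_tb:.0f} TB of primary data")
print(f"~{bytes_per_doc / 1024:.1f} KB per document on disk")
```

That works out to roughly 5 KB per document on disk, which is a useful sanity check when capacity planning.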
System Overview
We mostly rely on the default ES settings, but there are a few key places where we override them. I’m going to leave the details of how we partition data to the post on indexing.
- The system is running Elasticsearch 0.90.8.
- We have a single cluster spread across 3 data centers. There are risks with doing this: you need low-latency links, and longer timeouts to prevent communication problems within the cluster. Even with very good links between DCs, local nodes are still 10x faster to reach than remote nodes:
```yaml
discovery.zen.fd.ping_interval: 15s
discovery.zen.fd.ping_timeout: 60s
discovery.zen.fd.ping_retries: 5
```
This also helps prevent nodes from being booted if they experience long GC cycles.
- 10 data nodes and one dedicated master node in each DC (30 data nodes total)
- Currently 175 shards with 3 copies of each shard.
- Disable multicast, and only list the master nodes in the unicast list. Don’t waste time pinging servers that can’t be masters.
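A minimal sketch of that discovery configuration, with hypothetical master hostnames standing in for our real ones:

```yaml
discovery.zen.ping.multicast.enabled: false
# Only the dedicated masters (one per DC) need to appear here
discovery.zen.ping.unicast.hosts: ["es-master-dc1", "es-master-dc2", "es-master-dc3"]
```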
- Dedicated hardware (96GB RAM, SSDs, fast CPUs) – CPUs definitely seem to be the bottleneck that drives us to add more nodes.
- ES is configured to use 30GB of the RAM, with `indices.fielddata.cache.size` set to 20GB. This is still not ideal, but better than our previous setting of `index.fielddata.cache: soft`. 1.0 is going to have an interesting new feature that applies a “circuit breaker” to try to detect and squash out-of-memory problems. Elasticsearch has come a long way in preventing OutOfMemory exceptions, but I still have painful memories from when we were running into them fairly often in 0.18 and 0.19.
- Use shard allocation awareness to spread replicas across data centers and within data centers:
```yaml
cluster.routing.allocation.awareness.attributes: dc, parity
```
`dc` is the name of the data center; `parity` we set to 0 or 1 based on the ES host number. Even hosts are on one router and odd hosts on another.
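Putting the awareness attributes together, each node's elasticsearch.yml would carry something like the following (the values shown are illustrative, for an even-numbered host in one DC):

```yaml
node.dc: dc1        # data center name (illustrative)
node.parity: 0      # 0 for even-numbered hosts, 1 for odd
cluster.routing.allocation.awareness.attributes: dc, parity
```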
- We use fixed size thread pools: very deep for indexing because we don’t want to lose index operations, and much shallower for search and get, since if an operation has been queued that deeply the client will probably have timed out by the time it gets a response.
```yaml
threadpool:
  index:
    type: fixed
    size: 30
    queue_size: 1000
  bulk:
    type: fixed
    size: 30
    queue_size: 1000
  search:
    type: fixed
    size: 100
    queue_size: 200
  get:
    type: fixed
    size: 100
    queue_size: 200
```
- Elastica PHP Client. I’ll add the disclaimer that I think this client is slightly over object-oriented, which makes it pretty big. We use it mostly because it has great error handling, and we mostly just set raw queries rather than building a sequence of objects. This limits how much of the code we have to load, because we use some PHP autoloading magic:
```php
// Autoloader for Elastica interface client to ElasticSearch
function elastica_autoload( $class_name ) {
	if ( 'Elastica' === substr( $class_name, 0, strlen( 'Elastica' ) ) ) {
		$path = str_replace( '\\', '/', $class_name );
		include_once dirname( __FILE__ ) . '/Elastica/lib/' . $path . '.php';
	}
}
spl_autoload_register( 'elastica_autoload' );
```
- Custom round robin ES node selection built by extending the Elastica client. We track stats on errors and the number of operations, and mark nodes as down for a few minutes if certain errors (e.g. connection errors) occur.
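Our round-robin selection lives in our Elastica subclass, but the core idea can be sketched in a few lines of self-contained Python. The class name, method names, and the five-minute down window are assumptions for the sketch, not our actual code:

```python
import itertools
import time

class NodePool:
    """Round-robin over ES nodes, skipping nodes recently marked down."""

    def __init__(self, nodes, down_seconds=300):
        self._cycle = itertools.cycle(nodes)
        self._down = {}              # node -> timestamp when it was marked down
        self._down_seconds = down_seconds
        self._count = len(nodes)

    def mark_down(self, node):
        # Called when a request to this node fails, e.g. with a connection error
        self._down[node] = time.time()

    def next_node(self):
        # Walk the ring at most once looking for a usable node
        for _ in range(self._count):
            node = next(self._cycle)
            downed_at = self._down.get(node)
            if downed_at is None or time.time() - downed_at > self._down_seconds:
                self._down.pop(node, None)   # the down window has expired
                return node
        raise RuntimeError("all nodes are marked down")
```

A real client would also keep per-node counters of operations and errors, as we do for our stats.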
- Some customizations for node recovery. See my post on speeding up restart time.
Well, that’s not so hard…
If you’ve worked with Elasticsearch a bit then most of those settings probably seem fairly expected.
Elasticsearch scaling is deceptively easy when you first start out. Before building this cluster we ran a two node cluster for over a year and were indexing a few hundred very active sites. We threw more and more data at that small cluster, and more and more queries, and the cluster hardly blinked. It has about 50 million documents on it now, and gets 1-2 million queries a day.
It is still pretty shocking to me how much harder it was to scale a 10-100x larger cluster. There is a substantial and humbling difference between running a small cluster with 10s of GB vs a cluster with 9 TB of data.
In the next part in this series I’ll talk about how we overcame the indexing problems we ran into: how our index structure scales over time, our optimizations for real time indexing, and how we handle indexing failures.