
dvara: A Mongo Proxy


We wrote dvara, a connection pooling proxy for Mongo, to solve an immediate problem we were facing: we were running into the connection limits on some of our replica sets. Mongo through 2.4 capped maxConns at 20,000. As the number of our application servers grew, the number of concurrent active connections to our replica sets grew with it. Mongo 2.6 removed this limit, but it unfortunately wasn’t ready at the time (we’re still testing it and haven’t upgraded yet). Even if it had been, each connection costs about 1MB, which takes away precious memory otherwise used by the database. A sharded cluster with mongos as the proxy was another path we considered. Enabling sharding might have helped, but that change would have spilled over into our application logic, and we rely on at least some of the features sharding restricts. We are experimenting with sharded replica sets in our environment, and from that experience we weren’t confident they would actually solve our connection limit problem. So we set out on what seemed like an ambitious, and in my mind difficult, goal: building a connection pooling proxy for mongod.

Down to the Wire

We started off with a simple proof of concept, working backwards from the legacy wire protocol documentation, and got it far enough to serve basic read/write queries in a few weeks. We attribute the speed at which we got the prototype working to building it in Go. Go allowed us to write easy-to-follow code without paying the cost of a thread per connection, and without resorting to callbacks or some other form of manually managed asynchronous network IO. Additionally, while our proxy prefers not to look at the bytes flowing through or decode the BSON, for performance reasons, Gustavo Niemeyer‘s excellent mgo driver, along with its bson library, made it trivial to introspect and mutate the traffic we needed to. The first such cases were the isMaster and replSetGetStatus commands. These commands return the member/host information the client uses to decide which members to connect and talk to, so we need to replace the real host/ports with the proxy host/ports.
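To make that concrete, here is a minimal sketch of how an isMaster reply could be rewritten using mgo’s bson package. It is illustrative only, not dvara’s actual code; the proxyFor mapping and the function names are made up for the example.

package main

import (
	"fmt"

	"gopkg.in/mgo.v2/bson"
)

// rewriteIsMaster decodes an isMaster reply document and swaps the real
// member addresses for their proxy equivalents, so clients only ever see
// the proxy's host:port pairs. Illustrative sketch, not dvara's code.
func rewriteIsMaster(raw []byte, proxyFor map[string]string) ([]byte, error) {
	var doc bson.M
	if err := bson.Unmarshal(raw, &doc); err != nil {
		return nil, err
	}

	// The advertised member list.
	if hosts, ok := doc["hosts"].([]interface{}); ok {
		for i, h := range hosts {
			if real, ok := h.(string); ok {
				if proxied, found := proxyFor[real]; found {
					hosts[i] = proxied
				}
			}
		}
	}

	// The "primary" and "me" fields also carry real addresses.
	for _, key := range []string{"primary", "me"} {
		if real, ok := doc[key].(string); ok {
			if proxied, found := proxyFor[real]; found {
				doc[key] = proxied
			}
		}
	}

	return bson.Marshal(doc)
}

func main() {
	// Hypothetical mapping used only for this example.
	proxyFor := map[string]string{"db1.internal:27017": "proxy1.internal:6000"}
	orig, _ := bson.Marshal(bson.M{
		"ismaster": true,
		"hosts":    []interface{}{"db1.internal:27017"},
		"primary":  "db1.internal:27017",
	})
	rewritten, err := rewriteIsMaster(orig, proxyFor)
	if err != nil {
		panic(err)
	}
	var check bson.M
	bson.Unmarshal(rewritten, &check)
	fmt.Println(check["hosts"], check["primary"]) // [proxy1.internal:6000] proxy1.internal:6000
}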

Yet another command that needed special handling, and one of the known problems we had to solve, was the way Mongo 2.4 and earlier require a second follow-up call for getLastError. Fortunately this got some much-needed love in 2.6, but through 2.4 mutation operations were essentially split into two parts: first, the mutation itself; and second, the getLastError command, which carries some important options, including the write concern. Consider what a connection pooling proxy does: a client sends a command, we take a connection from our pool, proxy the command and the response, and put the connection back into the pool for someone else to use. A good proxy holds a connection from the pool for the least amount of time possible. Unfortunately the design of getLastError means we can’t do that, because getLastError state lives in mongod per connection. This design is awkward enough that the mongo shell actually needs special logic to ensure the state doesn’t get inadvertently reset. It was clear we’d need to similarly maintain this state per connection in the proxy. Our implementation tries to preserve the semantics mongod itself has around getLastError, though once we’ve moved all our servers and clients to 2.6 this will be unnecessary with the new wire protocol.
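The sketch below shows one way a pooling proxy could keep that per-connection state straight: pin the backend connection after a mutation so the follow-up getLastError is answered by the same mongod connection. The types and names are hypothetical and chosen for illustration; they are not dvara’s internals.

// Package proxy: an illustrative sketch, not dvara's implementation.
package proxy

import (
	"net"
	"sync"
)

// lastErrorPinner remembers, per client connection, the backend connection
// that served the client's most recent mutation. mongod tracks getLastError
// state on that exact connection, so a follow-up getLastError must be routed
// back to it rather than to an arbitrary pooled connection.
type lastErrorPinner struct {
	mu     sync.Mutex
	pinned map[net.Conn]net.Conn // client conn -> backend conn holding its state
}

func newLastErrorPinner() *lastErrorPinner {
	return &lastErrorPinner{pinned: make(map[net.Conn]net.Conn)}
}

// afterMutation pins the backend connection instead of returning it to the
// pool right away.
func (p *lastErrorPinner) afterMutation(client, backend net.Conn) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.pinned[client] = backend
}

// forGetLastError returns the pinned backend connection for this client, if
// any, so the getLastError command observes the preceding write's outcome.
func (p *lastErrorPinner) forGetLastError(client net.Conn) (net.Conn, bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	backend, ok := p.pinned[client]
	return backend, ok
}

// release unpins the backend connection once the getLastError round trip is
// done (or the client disconnects), letting it return to the pool.
func (p *lastErrorPinner) release(client net.Conn) (net.Conn, bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	backend, ok := p.pinned[client]
	delete(p.pinned, client)
	return backend, ok
}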

Proxying in Production

An aspect we refined before we started using this in production was auto-discovering the replica set configuration from the nodes themselves. At first our implementation required manual configuration mapping each node we wanted to proxy. We always need such a mapping in order to alter the isMaster and replSetGetStatus responses mentioned earlier. Our current implementation builds this mapping automatically, using the provided member list as a seed list, as in the sketch below. We’re still improving how this works, and will likely reintroduce manual overrides to support the unusual situations that often arise in real life.
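As a rough sketch of that discovery step, the snippet below dials a single seed member with mgo, asks it for the replica set’s member list via isMaster, and builds a real-address-to-proxy-address mapping. The function names and the nextProxyAddr allocator are hypothetical; this is not dvara’s actual code.

// Package discovery: an illustrative sketch, not dvara's implementation.
package discovery

import (
	"time"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// discoverMembers dials one seed node directly and returns the replica set
// member addresses it advertises in its isMaster response.
func discoverMembers(seed string) ([]string, error) {
	session, err := mgo.DialWithInfo(&mgo.DialInfo{
		Addrs:   []string{seed},
		Direct:  true, // talk to the seed itself, not wherever it redirects us
		Timeout: 5 * time.Second,
	})
	if err != nil {
		return nil, err
	}
	defer session.Close()

	var result struct {
		Hosts []string `bson:"hosts"`
	}
	if err := session.Run(bson.D{{"isMaster", 1}}, &result); err != nil {
		return nil, err
	}
	return result.Hosts, nil
}

// buildMapping assigns a proxy address to every discovered member. The
// nextProxyAddr callback stands in for whatever allocation scheme is used.
func buildMapping(members []string, nextProxyAddr func() string) map[string]string {
	mapping := make(map[string]string, len(members))
	for _, m := range members {
		mapping[m] = nextProxyAddr()
	}
	return mapping
}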

One of the benefits of dvara has been the ability to get metrics about various low-level operations that were not otherwise readily available to us. We track about 20 metrics, including the number of mutation operations, the number of operations with responses, the latency of operations, and the number of concurrent connections. Our current implementation is tied to Ganglia using our own Go client, but we’re working on making that pluggable.
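A hedged sketch of what “pluggable” might look like is below: a small interface the proxy codes against, with the Ganglia client as just one implementation behind it. The interface and method names here are illustrative, not dvara’s actual API.

// Package metrics: an illustrative sketch, not dvara's actual API.
package metrics

import "time"

// Ender closes out a timing measurement started by BumpTime.
type Ender interface {
	End()
}

// Client is the small surface the proxy would code against: counters for
// things like mutation operations, sampled values for things like concurrent
// connections, and timers for per-operation latency. A Ganglia-backed
// implementation and a no-op implementation would both satisfy it.
type Client interface {
	BumpSum(key string, val float64) // monotonic counters, e.g. "operations.mutation"
	BumpAvg(key string, val float64) // sampled values, e.g. "connections.concurrent"
	BumpTime(key string) Ender       // latency timers, e.g. "operation.latency"
}

// timer is a trivial Ender that reports elapsed time through a callback.
type timer struct {
	start  time.Time
	report func(d time.Duration)
}

func (t timer) End() { t.report(time.Since(t.start)) }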

We’ve been using dvara in production for some time, but we know there are mongo failure scenarios it doesn’t yet handle gracefully. We also want a better process for deploying new versions of dvara without disrupting clients (possibly using grace). We want to help improve the ecosystem around mongo, and would love for you to contribute!

