A distributed design challenge

I have a distributed design problem that I thought Monks might find interesting and perhaps be willing to offer suggestions on.

The Setup

I'm writing real time bidding software. It works like this:

  1. My software receives a request to bid on an auction
  2. I return a bid on said request (I must respond in 85 to 100 milliseconds)
  3. Later, I receive a notification on whether or not I won the auction

Point 3 above means I don't find out whether I've won an auction until shortly after I've submitted the bid.

Currently, on a server farm of about 36 boxes, we're receiving roughly 400 bid requests a second, or about 11 requests/second per box. This is an extremely light load and will likely increase by one or two orders of magnitude when the system goes live.

We have various campaigns, each with individual daily and total budgets that cannot be overspent (a bit of wiggle room is allowed). The servers run separate databases and the only shared data source we have is a Redis server with two slaves, though we could implement others.

So that's the setup. Now on to the problem.

The Problem

(The monetary numbers are fictional)

Several times we've been close to our budget and have had 16 servers respond simultaneously with winning bids, pushing us further over budget than we would like. We track our spending in Redis, so if all 36 servers respond to bid requests at the same moment, each might see that we've spent $99 (against a budget of $100) without realizing that all of them winning will push us to $135, or 35% over our limit.

In other words, it's impossible for a server to know that other servers are bidding at the same time, or whether those bids will win.
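To make the race concrete, here is a minimal sketch of the check every server performs independently (the key name and the submit_bid() call are hypothetical, not our actual code):

    use strict;
    use warnings;
    use Redis;

    my $redis  = Redis->new( server => 'redis.example.com:6379' );
    my $budget = 100;

    # Every one of the 36 servers runs this same check on its own.
    my $spent = $redis->get('spend:today') // 0;

    if ( $spent < $budget ) {
        # All 36 can read "$99 spent" in the same instant, bid, and
        # win; nothing here stops the total from landing at $135.
        # submit_bid();
    }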

Solutions

One naive strategy is to have each server register with Redis and receive a unique, sequential number. On each bid request, a server takes the current time segment modulo the number of servers to decide whether it should bid. This requires that all 36 servers have their clocks synchronized (not too big a deal), but it effectively reduces our throughput capacity to 1/36th of what we would like.
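A minimal sketch of that time-slot scheme, assuming each server was handed an id from 0 to 35 at registration (the slot length is a made-up tuning knob):

    use strict;
    use warnings;

    my $num_servers = 36;
    my $slot_length = 1;    # seconds per slot; tuning is guesswork

    sub my_turn_to_bid {
        my ($server_id) = @_;

        # With synchronized clocks, all servers agree on the current
        # slot, and exactly one server id "owns" it at a time.
        my $slot = int( time() / $slot_length ) % $num_servers;
        return $slot == $server_id;
    }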

Further, we have multiple campaigns. We might want server/campaign combinations, so that while server 1 is allowed to bid on campaign A right now, server 2 is bidding on campaign B, ensuring that servers are not sitting idle (I can't tell you the number of campaigns, but there are many).
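One way to get those combinations without a lookup table might be to offset each campaign's rotation by a hash of its id; this is only a sketch of that idea:

    use strict;
    use warnings;
    use Digest::MD5 qw(md5);

    my $num_servers = 36;

    sub my_turn_for_campaign {
        my ($server_id, $campaign_id) = @_;

        # Offsetting the rotation by a per-campaign hash means that
        # at any instant each server owns a different campaign
        # instead of 35 of them sitting idle.
        my $offset = unpack( 'N', md5($campaign_id) );
        return ( time() + $offset ) % $num_servers == $server_id;
    }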

But then what happens if we get an important but low-volume campaign where a bid request arrives every five minutes or so? It's very conceivable that it will rarely, if ever, hit the server that is allowed to "handle" that campaign at the time. Further, calculating how often a particular campaign comes in is very hard (perhaps not NP-hard, but still hard).

Does anyone have experience here? Or do you have thoughts on how to approach this? Writing everything to a central RDBMS could cause serious contention issues, since we'd use transactions to lock critical data. Any solution that increases response times by more than a few milliseconds is likely to be problematic.

7 Comments

Idea based on 5 minutes of thinking about it (so, yeah, go fetch the box o' salt).

Since it seems there is going to be a truckload of bids, it might be advantageous to use the law of large numbers and, instead of keeping tabs on the exact number of bids won to make your next decisions, go with estimates based on previous observations. If the wins are independent and, say, you know you win 70% of them, then it's fair to assume that if there are 1,000 bids active at the moment, 700 will be won, plus or minus a computable safety margin.
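For instance, a rough sketch of that projection (all numbers and names here are illustrative, not from the post):

    use strict;
    use warnings;

    sub projected_spend {
        my (%args) = @_;

        # Expected spend: money gone plus outstanding bids discounted
        # by the historical win rate.
        my $expected = $args{spent}
            + $args{outstanding} * $args{win_rate} * $args{avg_price};

        # Binomial standard deviation, sqrt(n * p * (1 - p)), scaled
        # by the average price; a few sigmas is the safety margin.
        my $sigma = sqrt(   $args{outstanding}
                          * $args{win_rate}
                          * ( 1 - $args{win_rate} ) ) * $args{avg_price};

        return $expected + $args{sigmas} * $sigma;
    }

    # E.g. $99 already spent, 1,000 bids in flight at $0.01 each, a
    # 70% win rate, and a three-sigma margin:
    printf "Projected spend: \$%.2f\n",
        projected_spend(
            spent       => 99,
            outstanding => 1000,
            win_rate    => 0.70,
            avg_price   => 0.01,
            sigmas      => 3,
        );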

When a request comes in for a bid, is it already targeting a specific campaign? If bid requests are campaign-specific, how are they reaching the servers? Can they be steered to the server responsible for bidding on that campaign? Basically, sharding campaigns across the bid servers.

Same caveat as yanick, except I didn't spend anywhere near 5 minutes on this!

Is it feasible for everyone to know what percentage of the budget is available? If so, slow down bidding on a campaign as the budget limit is approached. Experience will dictate the rate of slowdown needed. Slowing down can be accomplished by reducing the probability of a server responding to a bid, or by lowering the bid, resulting in fewer wins (and at a better price). The goal is that the budget limit would be approached asymptotically. Low-volume campaigns should be fine. This probably won't work well if bids are large compared to the size of the budget.
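A sketch of that probabilistic slowdown (the quadratic curve is an arbitrary choice; the real one would be tuned from experience, as the comment says):

    use strict;
    use warnings;

    sub should_bid {
        my ($spent, $budget) = @_;

        my $remaining = ( $budget - $spent ) / $budget;
        return 0 if $remaining <= 0;

        # Squaring makes the slowdown steeper near the limit, so the
        # spend approaches the budget asymptotically.
        return rand() < $remaining**2;
    }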

I may just be restating what Bill and Yanick already said in a dumber way, but how I'd handle it would be to keep a running total of all outstanding bids using INCRBY on a Redis key named for today's date. Once you get the response back about whether you won, you can DECRBY if you lost. This way you're basically keeping transactional consistency on the running total, but without really tracking much info, and without using an RDBMS. If you've already bid up to your daily allowance, then don't bid until you have money back in the key. And like Yanick said, you can quite easily set an allowed overage on the running total, assuming you're not getting all the bids.
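That reserve-then-release pattern might look something like this with the CPAN Redis module (the key name and the integer-cents convention are my own illustration):

    use strict;
    use warnings;
    use Redis;
    use POSIX qw(strftime);

    my $redis = Redis->new( server => 'redis.example.com:6379' );
    my $key   = 'outstanding:' . strftime( '%Y-%m-%d', localtime );

    # Amounts are integer cents, since INCRBY/DECRBY are integer ops.
    sub try_to_bid {
        my ($cents, $daily_budget_cents) = @_;

        # Reserve the money up front; INCRBY is atomic across all 36
        # servers, so the returned total already includes everyone.
        my $total = $redis->incrby( $key, $cents );

        if ( $total > $daily_budget_cents ) {
            $redis->decrby( $key, $cents );   # over the line; back out
            return 0;
        }
        return 1;    # safe to submit the bid
    }

    sub on_auction_lost {
        my ($cents) = @_;
        $redis->decrby( $key, $cents );    # money back in the pool
    }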

Can't you reserve the budget for a bid whenever you make one? Then servers would decide whether to bid or not based on this available budget. If a bid isn't successful you free the reserved amount for that bid, if it is, you commit it to the spent amount.

Seems pretty straightforward, and it should avoid going over budget if you're using atomic operations.

Of course, it means that once you're near the end of your budget, bidding will naturally slow down, but that's expected.
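One way to make the check-and-reserve step a single atomic operation is a server-side Lua script, assuming a Redis version with scripting (2.6 or later); this is a hypothetical sketch, not the commenter's code:

    use strict;
    use warnings;
    use Redis;

    my $redis = Redis->new( server => 'redis.example.com:6379' );

    # KEYS[1] = reserved-spend counter, ARGV[1] = bid in cents,
    # ARGV[2] = budget in cents. Returns 1 if reserved, 0 if refused.
    my $reserve = q{
        local spent = tonumber(redis.call('GET', KEYS[1]) or '0')
        if spent + tonumber(ARGV[1]) > tonumber(ARGV[2]) then
            return 0
        end
        redis.call('INCRBY', KEYS[1], ARGV[1])
        return 1
    };

    sub reserve_budget {
        my ($campaign, $cents, $budget_cents) = @_;

        # EVAL runs the script atomically on the Redis server, so the
        # read-check-increment can't interleave between servers.
        return $redis->eval(
            $reserve,
            1,                       # number of KEYS
            "reserved:$campaign",    # KEYS[1]
            $cents,                  # ARGV[1]
            $budget_cents,           # ARGV[2]
        );
    }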

my 2-5c

Assuming the servers are physical servers, you could move the clocking source off the motherboard clocks and onto a single external clock for all the servers. This would be connected via serial (Symmetricom makes such a device; I don't work for them), and the servers would need to drop themselves out of the load balancer if they lose the clocking source.

With all the servers on the same clocking source, they will all have exactly the same time.

Another option is to use a layer-7 load balancer, a.k.a. a reverse proxy. The software isn't important; the point is to distribute load across the servers based on auction ID, such that each ID is serviced by only one server. A layer-7 balancer can also resend requests if the worker server fails, which a layer-3 balancer (like LVS) can't do.

Yet another option may be to again use a layer-7 load balancer and have it insert timestamps. The worker servers would then use these timestamps rather than their own.

Can't you just say campaign $C is handled by server hash($C)%36, for some appropriate hash function?
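Something like this, with core Digest::MD5 standing in for the "appropriate hash function":

    use strict;
    use warnings;
    use Digest::MD5 qw(md5);

    my $num_servers = 36;

    # Server N handles campaign $C iff hash($C) % 36 == N.
    sub owner_of {
        my ($campaign) = @_;
        # First four bytes of the MD5 digest as an unsigned integer.
        return unpack( 'N', md5($campaign) ) % $num_servers;
    }

    # A server with id $my_id only bids on campaigns it owns:
    #   bid(...) if owner_of($campaign) == $my_id;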

Or you could throw in the amount of money left in the budget: if $N units are left, you allow min($N, 36) servers to bid. That means that if your budget is almost gone, just one server bids, but if hardly anything has been spent, all servers are allowed to bid.

Third option: if a campaign $C has $N units left in its budget, all servers roll a 36-sided die. Any server that rolls less than $N is allowed to bid; any server that doesn't is not allowed to bid on this campaign again. This will mean, though, that for some campaigns you will go over the budget, while for others you will not reach it.
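A sketch of that die roll (the names are made up); since each of the 36 servers survives with probability min($N, 36)/36, on average min($N, 36) of them stay in, which also gives the scaling from the previous paragraph:

    use strict;
    use warnings;

    my %out;    # campaigns this server has rolled itself out of

    sub allowed_to_bid {
        my ($campaign, $units_left) = @_;

        return 0 if $out{$campaign};

        # Roll a 36-sided die (0..35); stay in while the roll is
        # under the remaining budget, otherwise drop out for good.
        if ( int( rand(36) ) >= $units_left ) {
            $out{$campaign} = 1;
            return 0;
        }
        return 1;
    }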

