In Chapters Three and Four we looked at advanced use of ØMQ's request-reply pattern. If you managed to digest all that, congratulations. In this chapter we'll focus on publish-subscribe, and extend ØMQ's core pub-sub pattern with higher-level patterns for performance, reliability, state distribution, and monitoring.
We'll cover:
- How to handle too-slow subscribers (the Suicidal Snail pattern).
- How to design high-speed subscribers (the Black Box pattern).
- How to build a shared key-value cache (the Clone pattern).
- How to use reactors to simplify complex servers.
- How to use the Binary Star pattern to add failover to a server.
- How to monitor a publish-subscribe network (the Espresso pattern).
A common problem you will hit when using the pub-sub pattern in real life is the slow subscriber. In an ideal world, we stream data at full speed from publishers to subscribers. In reality, subscriber applications are often written in interpreted languages, or just do a lot of work, or are just badly written, to the extent that they can't keep up with publishers.
How do we handle a slow subscriber? The ideal fix is to make the subscriber faster, but that might take work and time. Some of the classic strategies for handling a slow subscriber are:
- Queue messages on the publisher. This is what Gmail does when I don't read my email for a couple of hours. But in high-volume messaging, pushing queues upstream has the thrilling but unprofitable result of making publishers run out of memory and crash. Especially if there are lots of subscribers and it's not possible to flush to disk for performance reasons.
- Queue messages on the subscriber. This is much better, and it's what ØMQ does by default if the network can keep up with things. If anyone's going to run out of memory and crash, it'll be the subscriber rather than the publisher, which is fair. This is perfect for "peaky" streams where a subscriber can't keep up for a while, but can catch up when the stream slows down. However it's no answer to a subscriber that's simply too slow in general.
- Stop queuing new messages after a while. This is what Gmail does when my mailbox overflows its 7.554GB, no, 7.555GB, of space. New messages just get rejected or dropped. This is a great strategy from the perspective of the publisher, and it's what ØMQ does when the publisher sets a high-water mark or HWM (there's a small sketch of this after the list). However, it still doesn't help us fix the slow subscriber. Now we just get gaps in our message stream.
- Punish slow subscribers with disconnect. This is what Hotmail does when I don't login for two weeks, which is why I'm on my fifteenth Hotmail account. It's a nice brutal strategy that forces subscribers to sit up and pay attention, and would be ideal, but ØMQ doesn't do this, and there's no way to layer it on top since subscribers are invisible to publisher applications.
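Here's a minimal sketch of setting that publisher-side HWM in PyZMQ. The option name depends on your libzmq version (HWM in 2.x, SNDHWM in 3.x and later), and the endpoint and limit are made up for illustration:
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
# Limit how many messages ØMQ will queue per subscriber; beyond this,
# a PUB socket simply drops messages for the slow subscriber.
pub.setsockopt(zmq.SNDHWM, 10000)   # on libzmq 2.x use zmq.HWM instead
pub.bind("tcp://*:5556")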
None of these classic strategies fit. So we need to get creative. Rather than disconnect the publisher, let's convince the subscriber to kill itself. This is the Suicidal Snail pattern. When a subscriber detects that it's running too slowly (where "too slowly" is presumably a configured option that really means "so slowly that if you ever get here, shout really loudly because I need to know, so I can fix this!"), it croaks and dies.
How can a subscriber detect this? One way would be to sequence messages (number them in order), and use a HWM at the publisher. Now, if the subscriber detects a gap (i.e. the numbering isn't consecutive), it knows something is wrong. We then tune the HWM to the "croak and die if you hit this" level.
There are two problems with this solution. First, if we have many publishers, how do we sequence messages? The solution is to give each publisher a unique ID and add that to the sequencing (there's a small sketch of this below). Second, if subscribers use ZMQ_SUBSCRIBE filters, they will get gaps by definition. Our precious sequencing will be for nothing.
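As a sketch of that first fix (it's not part of the example that follows), each publisher would prepend its own ID to the message, and the subscriber would track one counter per publisher:
# Hypothetical gap detection with per-publisher sequence numbers.
# Assumed wire format: [publisher_id][sequence][body]
last_seq = {}   # publisher_id -> last sequence number seen

def has_gap(publisher_id, sequence):
    expected = last_seq.get(publisher_id, sequence - 1) + 1
    last_seq[publisher_id] = sequence
    return sequence != expected   # True means we missed something

# In the subscriber's receive loop, something like:
#   publisher_id, seq_s, body = sub.recv_multipart()
#   if has_gap(publisher_id, int(seq_s)): croak()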
Some use-cases won't use filters, and sequencing will work for them. But a more general solution is that the publisher timestamps each message. When a subscriber gets a message it checks the time, and if the difference is more than, say, one second, it does the "croak and die" thing. Possibly firing off a squawk to some operator console first.
The Suicidal Snail pattern works especially well when subscribers have their own clients and service-level agreements and need to guarantee certain maximum latencies. Aborting a subscriber may not seem like a constructive way to guarantee a maximum latency, but it's the assertion model. Abort today, and the problem will be fixed. Allow late data to flow downstream, and the problem may cause wider damage and take longer to appear on the radar.
So here is a minimal example of a Suicidal Snail:
"""
Suicidal Snail
Author: Min RK <benjaminrk@gmail.com>
"""
import sys
import threading
import time
import random
import zmq
from zhelpers import zpipe
# ---------------------------------------------------------------------
# This is our subscriber
# It connects to the publisher and subscribes to everything. It
# sleeps for a short time between messages to simulate doing too
# much work. If a message is more than 1 second late, it croaks.
MAX_ALLOWED_DELAY = 1.0 # secs
def subscriber(pipe):
# Subscribe to everything
ctx = zmq.Context.instance()
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.SUBSCRIBE, '')
sub.connect("tcp://localhost:5556")
# Get and process messages
while True:
clock = float(sub.recv())
# Suicide snail logic
if (time.time() - clock > MAX_ALLOWED_DELAY):
print >> sys.stderr, "E: subscriber cannot keep up, aborting\n",
break
# Work for 1 msec plus some random additional time
time.sleep(1e-3 * (1+2*random.random()))
pipe.send("gone and died")
# ---------------------------------------------------------------------
# This is our server task
# It publishes a time-stamped message to its pub socket every 1ms.
def publisher(pipe):
# Prepare publisher
ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")
while True:
# Send current clock (secs) to subscribers
pub.send(str(time.time()))
try:
signal = pipe.recv(zmq.NOBLOCK)
except zmq.ZMQError as e:
if e.errno == zmq.EAGAIN:
# nothing to recv
pass
else:
raise
else:
# received break message
break
time.sleep(1e-3) # 1msec wait
# This main thread simply starts a subscriber and a publisher, and then
# waits for the subscriber to signal that it has died.
def main():
ctx = zmq.Context.instance()
pub_pipe, pub_peer = zpipe(ctx)
sub_pipe, sub_peer = zpipe(ctx)
pub_thread = threading.Thread(target=publisher, args=(pub_peer,))
pub_thread.daemon=True
pub_thread.start()
sub_thread = threading.Thread(target=subscriber, args=(sub_peer,))
sub_thread.daemon=True
sub_thread.start()
# wait for sub to finish
sub_pipe.recv()
# tell pub to halt
pub_pipe.send("break")
time.sleep(0.1)
if __name__ == '__main__':
main()
Notes about this example:
- The message here consists simply of the current system clock as a number of seconds. In a realistic application you'd have at least a message header with the timestamp, and a message body with data.
- The example has subscriber and publisher in a single process, as two threads. In reality they would be separate processes. Using threads is just convenient for the demonstration.
A common use-case for pub-sub is distributing large data streams. For example, 'market data' coming from stock exchanges. A typical set-up would have a publisher connected to a stock exchange, taking price quotes, and sending them out to a number of subscribers. If there are a handful of subscribers, we could use TCP. If we have a larger number of subscribers, we'd probably use reliable multicast, i.e. pgm.
Let's imagine our feed has an average of 100,000 100-byte messages a second. That's a typical rate, after filtering market data we don't need to send on to subscribers. Now we decide to record a day's data (maybe 250 GB in 8 hours), and then replay it to a simulation network, i.e. a small group of subscribers. While 100K messages a second is easy for a ØMQ application, we want to replay much faster.
So we set up our architecture with a bunch of boxes: one for the publisher, and one for each subscriber. These are well-specified boxes: eight cores each, twelve for the publisher. (If you're reading this in 2015, which is when the Guide is scheduled to be finished, please add a zero to those numbers.)
And as we pump data into our subscribers, we notice two things:
- When we do even the slightest amount of work with a message, it slows down our subscriber to the point where it can't catch up with the publisher again.
- We're hitting a ceiling, at both publisher and subscriber, of around 6M messages a second, even after careful optimization and TCP tuning.
The first thing we have to do is break our subscriber into a multithreaded design so that we can do work with messages in one set of threads, while reading messages in another. Typically we don't want to process every message the same way. Rather, the subscriber will filter some messages, perhaps by prefix key. When a message matches some criteria, the subscriber will call a worker to deal with it. In ØMQ terms this means sending the message to a worker thread.
So the subscriber looks something like a queue device. We could use various sockets to connect the subscriber and workers. If we assume one-way traffic, and workers that are all identical, we can use PUSH and PULL, and delegate all the routing work to ØMQ. This is the simplest and fastest approach.
Figure 63 - The Simple Black Box Pattern
The subscriber talks to the publisher over TCP or PGM. The subscriber talks to its workers, which are all in the same process, over inproc.
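The Guide doesn't give a full listing for this simple design, but a minimal PyZMQ sketch might look like this; the endpoint, the worker count, and the interesting() filter are placeholders for illustration:
import threading
import zmq

def worker(ctx):
    work = ctx.socket(zmq.PULL)
    work.connect("inproc://workers")
    while True:
        msg = work.recv()
        # ... do the real processing here ...

def interesting(msg):
    return msg.startswith("A")           # placeholder filter by prefix key

def main():
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.SUBSCRIBE, '')
    sub.connect("tcp://localhost:5556")  # or a pgm:// endpoint
    push = ctx.socket(zmq.PUSH)
    push.bind("inproc://workers")        # bind before workers connect
    for _ in range(4):                   # identical workers, one per spare core
        t = threading.Thread(target=worker, args=(ctx,))
        t.daemon = True
        t.start()
    while True:
        msg = sub.recv()
        if interesting(msg):
            push.send(msg)               # ØMQ load-balances across workers

if __name__ == '__main__':
    main()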
Now to break that ceiling. What happens is that the subscriber thread hits 100% of CPU, and since it is one thread, it cannot use more than one core. A single thread will always hit a ceiling, be it at 2M, 6M, or more messages per second. We want to split the work across multiple threads that can run in parallel.
The approach used by many high-performance products, which works here, is sharding, meaning we split the work into parallel and independent streams. E.g. half of the topic keys are in one stream, half in another. We could use many streams, but performance won't scale unless we have free cores.
So let's see how to shard into two streams:
Figure 64 - Mad Black Box Pattern
With two streams, working at full speed, we would configure ØMQ as follows:
- Two I/O threads, rather than one.
- Two network interfaces (NIC), one per subscriber.
- Each I/O thread bound to a specific NIC.
- Two subscriber threads, bound to specific cores.
- Two SUB sockets, one per subscriber thread.
- The remaining cores assigned to worker threads.
- Worker threads connected to both subscriber PUSH sockets.
Ideally, we'd have no more threads in our architecture than we have cores. Once we create more threads than cores, we get contention between threads, and diminishing returns. There would be no benefit, for example, in creating more I/O threads.
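As a rough PyZMQ illustration of the ØMQ side of that setup (the addresses are placeholders, and pinning threads to specific cores is left to whatever OS tools you prefer):
import zmq

ctx = zmq.Context(io_threads=2)          # two I/O threads rather than one

sub_a = ctx.socket(zmq.SUB)
sub_a.setsockopt(zmq.AFFINITY, 1)        # bitmask: this socket uses I/O thread 0
sub_a.setsockopt(zmq.SUBSCRIBE, '')
sub_a.connect("tcp://192.168.1.1:5556")  # placeholder address on NIC 1

sub_b = ctx.socket(zmq.SUB)
sub_b.setsockopt(zmq.AFFINITY, 2)        # bitmask: this socket uses I/O thread 1
sub_b.setsockopt(zmq.SUBSCRIBE, '')
sub_b.connect("tcp://192.168.2.1:5556")  # placeholder address on NIC 2

# Each SUB socket is then read by its own subscriber thread, which pushes
# matching messages over inproc to the shared pool of worker threads.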
Pub-sub is like a radio broadcast: you miss everything before you join, and then how much information you get depends on the quality of your reception. Surprisingly, for engineers who are used to aiming for "perfection", this model is useful and widespread, because it maps perfectly to real-world distribution of information. Think of Facebook and Twitter, the BBC World Service, and the sports results.
However, there are also a whole lot of cases where more reliable pub-sub would be valuable, if we could do it. As we did for request-reply, let's define 'reliability' in terms of what can go wrong. Here are the classic problems with pub-sub:
- Subscribers join late, so miss messages the server already sent.
- Subscriber connections take time to establish, and subscribers can lose messages while connecting.
- Subscribers go away, and lose messages while they are away.
Less often, we see problems like these:
- Subscribers can crash, and restart, and lose whatever data they already received.
- Subscribers can fetch messages too slowly, so queues build up and then overflow.
- Networks can become overloaded and drop data (specifically, for PGM).
- Networks can become too slow, so publisher-side queues overflow, and publishers crash.
A lot more can go wrong but these are the typical failures we see in a realistic system.
We've already solved some of these, such as the slow subscriber, which we handle with the Suicidal Snail pattern. But for the rest, it would be nice to have a generic, reusable framework for reliable pub-sub.
The difficulty is that we have no idea what our target applications actually want to do with their data. Do they filter it, and process only a subset of messages? Do they log the data somewhere for later reuse? Do they distribute the data further to workers? There are dozens of plausible scenarios, and each will have its own ideas about what reliability means and how much it's worth in terms of effort and performance.
So we'll build an abstraction that we can implement once, and then reuse for many applications. This abstraction is a shared key-value cache, which stores a set of blobs indexed by unique keys.
Don't confuse this with distributed hash tables, which solve the wider problem of connecting peers in a distributed network, or with distributed key-value tables, which act like non-SQL databases. All we will build is a system that reliably clones some in-memory state from a server to a set of clients. We want to:
- Let a client join the network at any time, and reliably get the current server state.
- Let any client update the key-value cache (inserting new key-value pairs, updating existing ones, or deleting them).
- Reliably propagate changes to all clients, and do this with minimum latency overhead.
- Handle very large numbers of clients, e.g. tens of thousands or more.
The key aspect of the Clone pattern is that clients talk back to servers, which is more than we do in a simple pub-sub dialog. This is why I use the terms 'server' and 'client' instead of 'publisher' and 'subscriber'. We'll use pub-sub as the core of Clone but it is a bit more than that.
We'll develop Clone in stages, solving one problem at a time. First, let's look at how to distribute key-value updates from a server to a set of clients. We'll take our weather server from Chapter One and refactor it to send messages as key-value pairs. We'll modify our client to store these in a hash table.
Figure 65 - Simplest Clone Model
This is the server:
"""
Clone server Model One
"""
import random
import time
import zmq
from kvsimple import KVMsg
def main():
# Prepare our context and publisher socket
ctx = zmq.Context()
publisher = ctx.socket(zmq.PUB)
publisher.bind("tcp://*:5556")
time.sleep(0.2)
sequence = 0
random.seed(time.time())
kvmap = {}
try:
while True:
# Distribute as key-value message
sequence += 1
kvmsg = KVMsg(sequence)
kvmsg.key = "%d" % random.randint(1,10000)
kvmsg.body = "%d" % random.randint(1,1000000)
kvmsg.send(publisher)
kvmsg.store(kvmap)
except KeyboardInterrupt:
print " Interrupted\n%d messages out" % sequence
if __name__ == '__main__':
main()
And here is the client:
"""
Clone Client Model One
Author: Min RK <benjaminrk@gmail.com>
"""
import random
import time
import zmq
from kvsimple import KVMsg
def main():
# Prepare our context and publisher socket
ctx = zmq.Context()
updates = ctx.socket(zmq.SUB)
updates.linger = 0
updates.setsockopt(zmq.SUBSCRIBE, '')
updates.connect("tcp://localhost:5556")
kvmap = {}
sequence = 0
while True:
try:
kvmsg = KVMsg.recv(updates)
except:
break # Interrupted
kvmsg.store(kvmap)
sequence += 1
print "Interrupted\n%d messages in" % sequence
if __name__ == '__main__':
main()
Some notes about this code:
- All the hard work is done in a kvmsg class. This class works with key-value message objects, which are multi-part ØMQ messages structured as three frames: a key (a ØMQ string), a sequence number (64-bit value, in network byte order), and a binary body (holds everything else).
- The server generates messages with a randomized 4-digit key, which lets us simulate a large but not enormous hash table (10K entries).
- The server does a 200 millisecond pause after binding its socket. This is to prevent "slow joiner syndrome" where the subscriber loses messages as it connects to the server's socket. We'll remove that in later models.
- We'll use the terms 'publisher' and 'subscriber' in the code to refer to sockets. This will help later when we have multiple sockets doing different things.
Here is the kvmsg class, in the simplest form that works for now:
"""
kvsimple - simple key-value message class for example applications
Author: Min RK <benjaminrk@gmail.com>
"""
import struct # for packing integers
import sys
import zmq
class KVMsg(object):
"""
Message is formatted on wire as 3 frames:
frame 0: key (0MQ string)
frame 1: sequence (8 bytes, network order)
frame 2: body (blob)
"""
key = None # key (string)
sequence = 0 # int
body = None # blob
def __init__(self, sequence, key=None, body=None):
assert isinstance(sequence, int)
self.sequence = sequence
self.key = key
self.body = body
def store(self, dikt):
"""Store me in a dict if I have anything to store"""
# this seems weird to check, but it's what the C example does
if self.key is not None and self.body is not None:
dikt[self.key] = self
def send(self, socket):
"""Send key-value message to socket; any empty frames are sent as such."""
key = '' if self.key is None else self.key
        seq_s = struct.pack('!q', self.sequence)
body = '' if self.body is None else self.body
socket.send_multipart([ key, seq_s, body ])
@classmethod
def recv(cls, socket):
"""Reads key-value message from socket, returns new kvmsg instance."""
key, seq_s, body = socket.recv_multipart()
key = key if key else None
        seq = struct.unpack('!q',seq_s)[0]
body = body if body else None
return cls(seq, key=key, body=body)
def dump(self):
if self.body is None:
size = 0
data='NULL'
else:
size = len(self.body)
data=repr(self.body)
print >> sys.stderr, "[seq:{seq}][key:{key}][size:{size}] {data}".format(
seq=self.sequence,
key=self.key,
size=size,
data=data,
)
# ---------------------------------------------------------------------
# Runs self test of class
def test_kvmsg (verbose):
print " * kvmsg: ",
# Prepare our context and sockets
ctx = zmq.Context()
output = ctx.socket(zmq.DEALER)
output.bind("ipc://kvmsg_selftest.ipc")
input = ctx.socket(zmq.DEALER)
input.connect("ipc://kvmsg_selftest.ipc")
kvmap = {}
# Test send and receive of simple message
kvmsg = KVMsg(1)
kvmsg.key = "key"
kvmsg.body = "body"
if verbose:
kvmsg.dump()
kvmsg.send(output)
kvmsg.store(kvmap)
kvmsg2 = KVMsg.recv(input)
if verbose:
kvmsg2.dump()
assert kvmsg2.key == "key"
kvmsg2.store(kvmap)
assert len(kvmap) == 1 # shouldn't be different
print "OK"
if __name__ == '__main__':
test_kvmsg('-v' in sys.argv)
We'll make a more sophisticated kvmsg class later, for use in real applications.
Both the server and client maintain hash tables, but this first model only works properly if we start all clients before the server, and the clients never crash. That's not 'reliability'.
In order to allow a late (or recovering) client to catch up with a server it has to get a snapshot of the server's state. Just as we've reduced "message" to mean "a sequenced key-value pair", we can reduce "state" to mean "a hash table". To get the server state, a client opens a DEALER socket and asks for it explicitly (a DEALER rather than a REQ socket, because the snapshot comes back as a stream of messages rather than a single reply).
Figure 66 - State Replication
To make this work, we have to solve the timing problem. Getting a state snapshot will take a certain time, possibly fairly long if the snapshot is large. We need to correctly apply updates to the snapshot. But the server won't know when to start sending us updates. One way would be to start subscribing, get a first update, and then ask for "state for update N". This would require the server storing one snapshot for each update, which isn't practical.
So we will do the synchronization in the client, as follows:
- The client first subscribes to updates and then makes a state request. This guarantees that the state is going to be newer than the oldest update it has.
- The client waits for the server to reply with state, and meanwhile queues all updates. It does this simply by not reading them: ØMQ keeps them queued on the socket queue, since we don't set a HWM.
- When the client receives its state snapshot, it begins once again to read updates. However, it discards any updates that are older than the snapshot. So if the snapshot includes updates up to 200, the client will discard everything up to and including 200, and apply from 201 onwards.
- The client then applies updates to its own state snapshot.
It's a simple model that exploits ØMQ's own internal queues. Here's the server:
"""
Clone server Model Two
Author: Min RK <benjaminrk@gmail.com>
"""
import random
import threading
import time
import zmq
from kvsimple import KVMsg
from zhelpers import zpipe
def main():
# Prepare our context and publisher socket
ctx = zmq.Context()
publisher = ctx.socket(zmq.PUB)
publisher.bind("tcp://*:5557")
updates, peer = zpipe(ctx)
manager_thread = threading.Thread(target=state_manager, args=(ctx,peer))
manager_thread.daemon=True
manager_thread.start()
sequence = 0
random.seed(time.time())
try:
while True:
# Distribute as key-value message
sequence += 1
kvmsg = KVMsg(sequence)
kvmsg.key = "%d" % random.randint(1,10000)
kvmsg.body = "%d" % random.randint(1,1000000)
kvmsg.send(publisher)
kvmsg.send(updates)
except KeyboardInterrupt:
print " Interrupted\n%d messages out" % sequence
# simple struct for routing information for a key-value snapshot
class Route:
def __init__(self, socket, identity):
self.socket = socket # ROUTER socket to send to
self.identity = identity # Identity of peer who requested state
def send_single(key, kvmsg, route):
"""Send one state snapshot key-value pair to a socket
Hash item data is our kvmsg object, ready to send
"""
# Send identity of recipient first
route.socket.send(route.identity, zmq.SNDMORE)
kvmsg.send(route.socket)
def state_manager(ctx, pipe):
"""This thread maintains the state and handles requests from clients for snapshots.
"""
kvmap = {}
pipe.send("READY")
snapshot = ctx.socket(zmq.ROUTER)
snapshot.bind("tcp://*:5556")
poller = zmq.Poller()
poller.register(pipe, zmq.POLLIN)
poller.register(snapshot, zmq.POLLIN)
sequence = 0 # Current snapshot version number
while True:
try:
items = dict(poller.poll())
except (zmq.ZMQError, KeyboardInterrupt):
break # interrupt/context shutdown
# Apply state update from main thread
if pipe in items:
kvmsg = KVMsg.recv(pipe)
sequence = kvmsg.sequence
kvmsg.store(kvmap)
# Execute state snapshot request
if snapshot in items:
msg = snapshot.recv_multipart()
identity = msg[0]
request = msg[1]
if request == "ICANHAZ?":
pass
else:
print "E: bad request, aborting\n",
break
# Send state snapshot to client
route = Route(snapshot, identity)
# For each entry in kvmap, send kvmsg to client
for k,v in kvmap.items():
send_single(k,v,route)
# Now send END message with sequence number
print "Sending state shapshot=%d\n" % sequence,
snapshot.send(identity, zmq.SNDMORE)
kvmsg = KVMsg(sequence)
kvmsg.key = "KTHXBAI"
kvmsg.body = ""
kvmsg.send(snapshot)
if __name__ == '__main__':
main()
And here is the client:
"""
Clone client Model Two
Author: Min RK <benjaminrk@gmail.com>
"""
import time
import zmq
from kvsimple import KVMsg
def main():
# Prepare our context and subscriber
ctx = zmq.Context()
snapshot = ctx.socket(zmq.DEALER)
snapshot.linger = 0
snapshot.connect("tcp://localhost:5556")
subscriber = ctx.socket(zmq.SUB)
subscriber.linger = 0
subscriber.setsockopt(zmq.SUBSCRIBE, '')
subscriber.connect("tcp://localhost:5557")
kvmap = {}
# Get state snapshot
sequence = 0
snapshot.send("ICANHAZ?")
while True:
try:
kvmsg = KVMsg.recv(snapshot)
except:
break; # Interrupted
if kvmsg.key == "KTHXBAI":
sequence = kvmsg.sequence
print "Received snapshot=%d" % sequence
break # Done
kvmsg.store(kvmap)
# Now apply pending updates, discard out-of-sequence messages
while True:
try:
kvmsg = KVMsg.recv(subscriber)
except:
break # Interrupted
if kvmsg.sequence > sequence:
sequence = kvmsg.sequence
kvmsg.store(kvmap)
if __name__ == '__main__':
main()
Some notes about this code:
- The server uses two threads, for simpler design. One thread produces random updates, and the second thread handles state. The two communicate across PAIR sockets. You might like to use SUB sockets but you'd hit the "slow joiner" problem where the subscriber would randomly miss some messages while connecting. PAIR sockets let us explicitly synchronize the two threads.
- We set a HWM on the updates socket pair, since hash table insertions are relatively slow. Without this, the server runs out of memory. On inproc connections, the real HWM is the sum of the HWM of both sockets, so we set the HWM on each socket.
- The client is really simple. In C, under 60 lines of code. A lot of the heavy lifting is done in the kvmsg class, but still, the basic Clone pattern is easier to implement than it seemed at first.
- We don't use anything fancy for serializing the state. The hash table holds a set of kvmsg objects, and the server sends these, as a batch of messages, to the client requesting state. If multiple clients request state at once, each will get a different snapshot.
- We assume that the client has exactly one server to talk to. The server must be running; we do not try to solve the question of what happens if the server crashes.
Right now, these two programs don't do anything real, but they correctly synchronize state. It's a neat example of how to mix different patterns: PAIR-over-inproc, PUB-SUB, and ROUTER-DEALER.
In our second model, changes to the key-value cache came from the server itself. This is a centralized model, useful for example if we have a central configuration file we want to distribute, with local caching on each node. A more interesting model takes updates from clients, not the server. The server thus becomes a stateless broker. This gives us some benefits:
- We're less worried about the reliability of the server. If it crashes, we can start a new instance, and feed it new values.
- We can use the key-value cache to share knowledge between dynamic peers.
Updates from clients go via a PUSH-PULL socket flow from client to server.
Figure 67 - Republishing Updates
Why don't we allow clients to publish updates directly to other clients? While this would reduce latency, it makes it impossible to assign ascending unique sequence numbers to messages. The server can do this. There's a more subtle second reason. In many applications it's important that updates have a single order, across many clients. Forcing all updates through the server ensures that they have the same order when they finally get to clients.
With unique sequencing, clients can detect the nastier failures - network congestion and queue overflow. If a client discovers that its incoming message stream has a hole, it can take action. It seems sensible that the client contact the server and ask for the missing messages, but in practice that isn't useful. If there are holes, they're caused by network stress, and adding more stress to the network will make things worse. All the client can really do is warn its users "Unable to continue", and stop, and not restart until someone has manually checked the cause of the problem.
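As a sketch only (none of the models in this chapter implement it), the client-side check for a hole could reuse the kvmsg sequence numbers like this:
import sys

def apply_update(kvmsg, sequence, kvmap):
    """Hypothetical helper: apply one update, abort on a sequence hole."""
    if sequence and kvmsg.sequence > sequence + 1:
        print >> sys.stderr, "E: lost updates %d..%d, unable to continue" % (
            sequence + 1, kvmsg.sequence - 1)
        sys.exit(1)      # stop until someone investigates the network
    kvmsg.store(kvmap)
    return kvmsg.sequence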
We'll now generate state updates in the client. Here's the server:
"""
Clone server Model Three
Author: Min RK <benjaminrk@gmail.com>
"""
import zmq
from kvsimple import KVMsg
# simple struct for routing information for a key-value snapshot
class Route:
def __init__(self, socket, identity):
self.socket = socket # ROUTER socket to send to
self.identity = identity # Identity of peer who requested state
def send_single(key, kvmsg, route):
"""Send one state snapshot key-value pair to a socket"""
# Send identity of recipient first
route.socket.send(route.identity, zmq.SNDMORE)
kvmsg.send(route.socket)
def main():
# context and sockets
ctx = zmq.Context()
snapshot = ctx.socket(zmq.ROUTER)
snapshot.bind("tcp://*:5556")
publisher = ctx.socket(zmq.PUB)
publisher.bind("tcp://*:5557")
collector = ctx.socket(zmq.PULL)
collector.bind("tcp://*:5558")
sequence = 0
kvmap = {}
poller = zmq.Poller()
poller.register(collector, zmq.POLLIN)
poller.register(snapshot, zmq.POLLIN)
while True:
try:
items = dict(poller.poll(1000))
except:
break # Interrupted
# Apply state update sent from client
if collector in items:
kvmsg = KVMsg.recv(collector)
sequence += 1
kvmsg.sequence = sequence
kvmsg.send(publisher)
kvmsg.store(kvmap)
print "I: publishing update %5d" % sequence
# Execute state snapshot request
if snapshot in items:
msg = snapshot.recv_multipart()
identity = msg[0]
request = msg[1]
if request == "ICANHAZ?":
pass
else:
print "E: bad request, aborting\n",
break
# Send state snapshot to client
route = Route(snapshot, identity)
# For each entry in kvmap, send kvmsg to client
for k,v in kvmap.items():
send_single(k,v,route)
# Now send END message with sequence number
print "Sending state shapshot=%d\n" % sequence,
snapshot.send(identity, zmq.SNDMORE)
kvmsg = KVMsg(sequence)
kvmsg.key = "KTHXBAI"
kvmsg.body = ""
kvmsg.send(snapshot)
print " Interrupted\n%d messages handled" % sequence
if __name__ == '__main__':
main()
And here is the client:
"""
Clone client Model Three
Author: Min RK <benjaminrk@gmail.com>
"""
import random
import time
import zmq
from kvsimple import KVMsg
def main():
# Prepare our context and subscriber
ctx = zmq.Context()
snapshot = ctx.socket(zmq.DEALER)
snapshot.linger = 0
snapshot.connect("tcp://localhost:5556")
subscriber = ctx.socket(zmq.SUB)
subscriber.linger = 0
subscriber.setsockopt(zmq.SUBSCRIBE, '')
subscriber.connect("tcp://localhost:5557")
publisher = ctx.socket(zmq.PUSH)
publisher.linger = 0
publisher.connect("tcp://localhost:5558")
random.seed(time.time())
kvmap = {}
# Get state snapshot
sequence = 0
snapshot.send("ICANHAZ?")
while True:
try:
kvmsg = KVMsg.recv(snapshot)
except:
return # Interrupted
if kvmsg.key == "KTHXBAI":
sequence = kvmsg.sequence
print "I: Received snapshot=%d" % sequence
break # Done
kvmsg.store(kvmap)
poller = zmq.Poller()
poller.register(subscriber, zmq.POLLIN)
alarm = time.time()+1.
while True:
tickless = 1000*max(0, alarm - time.time())
try:
items = dict(poller.poll(tickless))
except:
break # Interrupted
if subscriber in items:
kvmsg = KVMsg.recv(subscriber)
# Discard out-of-sequence kvmsgs, incl. heartbeats
if kvmsg.sequence > sequence:
sequence = kvmsg.sequence
kvmsg.store(kvmap)
print "I: received update=%d" % sequence
# If we timed-out, generate a random kvmsg
if time.time() >= alarm:
kvmsg = KVMsg(0)
kvmsg.key = "%d" % random.randint(1,10000)
kvmsg.body = "%d" % random.randint(1,1000000)
kvmsg.send(publisher)
kvmsg.store(kvmap)
alarm = time.time() + 1.
print " Interrupted\n%d messages in" % sequence
if __name__ == '__main__':
main()
Some notes about this code:
- The server has collapsed to a single task. It manages a PULL socket for incoming updates, a ROUTER socket for state requests, and a PUB socket for outgoing updates.
- The client uses a simple tickless timer to send a random update to the server once a second. In a real implementation we would drive updates from application code.
A realistic key-value cache will get large, and clients will usually be interested only in parts of the cache. Working with a subtree is fairly simple. The client has to tell the server the subtree when it makes a state request, and it has to specify the same subtree when it subscribes to updates.
There are a couple of common syntaxes for trees. One is the "path hierarchy", and another is the "topic tree". These look like:
- Path hierarchy: "/some/list/of/paths"
- Topic tree: "some.list.of.topics"
We'll use the path hierarchy, and extend our client and server so that a client can work with a single subtree. Working with multiple subtrees is not much more difficult; we won't do that here, but it's a trivial extension (there's a small sketch of it below).
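For instance, subscribing to two subtrees just means two ZMQ_SUBSCRIBE filters and two snapshot requests; roughly (a sketch, using the same socket names as the Model Four client below, not code from the models):
# Hypothetical: a client interested in two subtrees
for subtree in ("/client/", "/config/"):
    subscriber.setsockopt(zmq.SUBSCRIBE, subtree)
    snapshot.send_multipart(["ICANHAZ?", subtree])
    # ...then read each snapshot up to its KTHXBAI reply, as in the model below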
Here's the server, a small variation on Model Three:
"""
Clone server Model Four
Author: Min RK <benjaminrk@gmail.com>
"""
import zmq
from kvsimple import KVMsg
# simple struct for routing information for a key-value snapshot
class Route:
def __init__(self, socket, identity, subtree):
self.socket = socket # ROUTER socket to send to
self.identity = identity # Identity of peer who requested state
self.subtree = subtree # Client subtree specification
def send_single(key, kvmsg, route):
"""Send one state snapshot key-value pair to a socket"""
# check front of key against subscription subtree:
if kvmsg.key.startswith(route.subtree):
# Send identity of recipient first
route.socket.send(route.identity, zmq.SNDMORE)
kvmsg.send(route.socket)
def main():
# context and sockets
ctx = zmq.Context()
snapshot = ctx.socket(zmq.ROUTER)
snapshot.bind("tcp://*:5556")
publisher = ctx.socket(zmq.PUB)
publisher.bind("tcp://*:5557")
collector = ctx.socket(zmq.PULL)
collector.bind("tcp://*:5558")
sequence = 0
kvmap = {}
poller = zmq.Poller()
poller.register(collector, zmq.POLLIN)
poller.register(snapshot, zmq.POLLIN)
while True:
try:
items = dict(poller.poll(1000))
except:
break # Interrupted
# Apply state update sent from client
if collector in items:
kvmsg = KVMsg.recv(collector)
sequence += 1
kvmsg.sequence = sequence
kvmsg.send(publisher)
kvmsg.store(kvmap)
print "I: publishing update %5d" % sequence
# Execute state snapshot request
if snapshot in items:
msg = snapshot.recv_multipart()
identity, request, subtree = msg
if request == "ICANHAZ?":
pass
else:
print "E: bad request, aborting\n",
break
# Send state snapshot to client
route = Route(snapshot, identity, subtree)
# For each entry in kvmap, send kvmsg to client
for k,v in kvmap.items():
send_single(k,v,route)
# Now send END message with sequence number
print "Sending state shapshot=%d\n" % sequence,
snapshot.send(identity, zmq.SNDMORE)
kvmsg = KVMsg(sequence)
kvmsg.key = "KTHXBAI"
kvmsg.body = subtree
kvmsg.send(snapshot)
print " Interrupted\n%d messages handled" % sequence
if __name__ == '__main__':
main()
And here is the client:
"""
Clone client Model Four
Author: Min RK <benjaminrk@gmail.com>
"""
import random
import time
import zmq
from kvsimple import KVMsg
SUBTREE = "/client/"
def main():
# Prepare our context and subscriber
ctx = zmq.Context()
snapshot = ctx.socket(zmq.DEALER)
snapshot.linger = 0
snapshot.connect("tcp://localhost:5556")
subscriber = ctx.socket(zmq.SUB)
subscriber.linger = 0
subscriber.setsockopt(zmq.SUBSCRIBE, SUBTREE)
subscriber.connect("tcp://localhost:5557")
publisher = ctx.socket(zmq.PUSH)
publisher.linger = 0
publisher.connect("tcp://localhost:5558")
random.seed(time.time())
kvmap = {}
# Get state snapshot
sequence = 0
snapshot.send_multipart(["ICANHAZ?", SUBTREE])
while True:
try:
kvmsg = KVMsg.recv(snapshot)
except:
return # Interrupted
if kvmsg.key == "KTHXBAI":
sequence = kvmsg.sequence
print "I: Received snapshot=%d" % sequence
break # Done
kvmsg.store(kvmap)
poller = zmq.Poller()
poller.register(subscriber, zmq.POLLIN)
alarm = time.time()+1.
while True:
tickless = 1000*max(0, alarm - time.time())
try:
items = dict(poller.poll(tickless))
except:
break # Interrupted
if subscriber in items:
kvmsg = KVMsg.recv(subscriber)
# Discard out-of-sequence kvmsgs, incl. heartbeats
if kvmsg.sequence > sequence:
sequence = kvmsg.sequence
kvmsg.store(kvmap)
print "I: received update=%d" % sequence
# If we timed-out, generate a random kvmsg
if time.time() >= alarm:
kvmsg = KVMsg(0)
kvmsg.key = SUBTREE + "%d" % random.randint(1,10000)
kvmsg.body = "%d" % random.randint(1,1000000)
kvmsg.send(publisher)
kvmsg.store(kvmap)
alarm = time.time() + 1.
print " Interrupted\n%d messages in" % sequence
if __name__ == '__main__':
main()
An ephemeral value is one that expires dynamically. If you think of Clone being used for a DNS-like service, then ephemeral values would let you do dynamic DNS. A node joins the network, publishes its address, and refreshes this regularly. If the node dies, its address eventually gets removed.
The usual abstraction for ephemeral values is to attach them to a "session", and delete them when the session ends. In Clone, sessions would be defined by clients, and would end if the client died.
The simpler alternative to using sessions is to define every ephemeral value with a "time to live" that tells the server when to expire the value. Clients then refresh values, and if they don't, the values expire.
I'm going to implement that simpler model because we don't know yet that it's worth making a more complex one. The difference is really in performance. If clients have a handful of ephemeral values, it's fine to set a TTL on each one. If clients use masses of ephemeral values, it's more efficient to attach them to sessions, and expire them in bulk.
First off, we need a way to encode the TTL in the key-value message. We could add a frame. The problem with using frames for properties is that each time we want to add a new property, we have to change the structure of our kvmsg class. It breaks compatibility. So let's add a 'properties' frame to the message, and code to let us get and put property values.
Next, we need a way to say, "delete this value". Up to now servers and clients have always blindly inserted or updated new values into their hash table. We'll say that if the value is empty, that means "delete this key".
Here's a more complete version of the kvmsg class, which implements a 'properties' frame (and adds a UUID frame, which we'll need later on). It also handles empty values by deleting the key from the hash, if necessary:
"""
kvmsg - key-value message class for example applications
Author: Min RK <benjaminrk@gmail.com>
"""
import struct # for packing integers
import sys
from uuid import uuid4
import zmq
# zmq.jsonapi ensures bytes, instead of unicode:
import zmq.utils.jsonapi as json
class KVMsg(object):
"""
Message is formatted on wire as 5 frames:
frame 0: key (0MQ string)
frame 1: sequence (8 bytes, network order)
frame 2: uuid (blob, 16 bytes)
frame 3: properties (0MQ string)
frame 4: body (blob)
"""
key = None
sequence = 0
uuid=None
properties = None
body = None
def __init__(self, sequence, uuid=None, key=None, properties=None, body=None):
assert isinstance(sequence, int)
self.sequence = sequence
if uuid is None:
uuid = uuid4().bytes
self.uuid = uuid
self.key = key
self.properties = {} if properties is None else properties
self.body = body
# dictionary access maps to properties:
def __getitem__(self, k):
return self.properties[k]
def __setitem__(self, k, v):
self.properties[k] = v
def get(self, k, default=None):
return self.properties.get(k, default)
    def store(self, dikt):
        """Store me in a dict if I have anything to store,
        else delete me from the dict."""
        if self.key is not None and self.body is not None:
            dikt[self.key] = self
        elif self.key in dikt:
            del dikt[self.key]
def send(self, socket):
"""Send key-value message to socket; any empty frames are sent as such."""
key = '' if self.key is None else self.key
seq_s = struct.pack('!q', self.sequence)
body = '' if self.body is None else self.body
prop_s = json.dumps(self.properties)
socket.send_multipart([ key, seq_s, self.uuid, prop_s, body ])
@classmethod
def recv(cls, socket):
"""Reads key-value message from socket, returns new kvmsg instance."""
return cls.from_msg(socket.recv_multipart())
@classmethod
def from_msg(cls, msg):
"""Construct key-value message from a multipart message"""
key, seq_s, uuid, prop_s, body = msg
key = key if key else None
seq = struct.unpack('!q',seq_s)[0]
body = body if body else None
prop = json.loads(prop_s)
return cls(seq, uuid=uuid, key=key, properties=prop, body=body)
def dump(self):
if self.body is None:
size = 0
data='NULL'
else:
size = len(self.body)
data=repr(self.body)
print >> sys.stderr, "[seq:{seq}][key:{key}][size:{size}] {props} {data}".format(
seq=self.sequence,
# uuid=hexlify(self.uuid),
key=self.key,
size=size,
props=json.dumps(self.properties),
data=data,
)
# ---------------------------------------------------------------------
# Runs self test of class
def test_kvmsg (verbose):
print " * kvmsg: ",
# Prepare our context and sockets
ctx = zmq.Context()
output = ctx.socket(zmq.DEALER)
output.bind("ipc://kvmsg_selftest.ipc")
input = ctx.socket(zmq.DEALER)
input.connect("ipc://kvmsg_selftest.ipc")
kvmap = {}
# Test send and receive of simple message
kvmsg = KVMsg(1)
kvmsg.key = "key"
kvmsg.body = "body"
if verbose:
kvmsg.dump()
kvmsg.send(output)
kvmsg.store(kvmap)
kvmsg2 = KVMsg.recv(input)
if verbose:
kvmsg2.dump()
assert kvmsg2.key == "key"
kvmsg2.store(kvmap)
assert len(kvmap) == 1 # shouldn't be different
# test send/recv with properties:
kvmsg = KVMsg(2, key="key", body="body")
kvmsg["prop1"] = "value1"
kvmsg["prop2"] = "value2"
kvmsg["prop3"] = "value3"
assert kvmsg["prop1"] == "value1"
if verbose:
kvmsg.dump()
kvmsg.send(output)
kvmsg2 = KVMsg.recv(input)
if verbose:
kvmsg2.dump()
# ensure properties were preserved
assert kvmsg2.key == kvmsg.key
assert kvmsg2.body == kvmsg.body
assert kvmsg2.properties == kvmsg.properties
assert kvmsg2["prop2"] == kvmsg["prop2"]
print "OK"
if __name__ == '__main__':
test_kvmsg('-v' in sys.argv)
The Model Five client is almost identical to Model Four. Diff is your friend. It uses the full kvmsg class instead of kvsimple, and sets a randomized 'ttl' property (measured in seconds) on each message:
kvmsg_set_prop (kvmsg, "ttl", "%d", randof (30));
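In the Python kvmsg class, where properties behave like a dictionary, the equivalent is roughly this (the key, body, and socket here are placeholders):
import random
from kvmsg import KVMsg

kvmsg = KVMsg(0, key="/client/1234", body="some value")
kvmsg["ttl"] = random.randint(0, 30)   # seconds before the server may expire it
kvmsg.send(publisher)                  # 'publisher' being the client's update socket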
The Model Five server has totally changed. Instead of a poll loop, we're now using a reactor. This just makes it simpler to mix timers and socket events. Unfortunately in C the reactor style is more verbose. Your mileage will vary in other languages. But reactors seem to be a better way of building more complex ØMQ applications. Here's the server:
"""
Clone server Model Five
Author: Min RK <benjaminrk@gmail.com>
"""
import logging
import time
import zmq
from zmq.eventloop.ioloop import IOLoop, PeriodicCallback
from zmq.eventloop.zmqstream import ZMQStream
from kvmsg import KVMsg
from zhelpers import dump
# simple struct for routing information for a key-value snapshot
class Route:
def __init__(self, socket, identity, subtree):
self.socket = socket # ROUTER socket to send to
self.identity = identity # Identity of peer who requested state
self.subtree = subtree # Client subtree specification
def send_single(key, kvmsg, route):
"""Send one state snapshot key-value pair to a socket"""
# check front of key against subscription subtree:
if kvmsg.key.startswith(route.subtree):
# Send identity of recipient first
route.socket.send(route.identity, zmq.SNDMORE)
kvmsg.send(route.socket)
class CloneServer(object):
# Our server is defined by these properties
ctx = None # Context wrapper
kvmap = None # Key-value store
loop = None # IOLoop reactor
port = None # Main port we're working on
sequence = 0 # How many updates we're at
snapshot = None # Handle snapshot requests
publisher = None # Publish updates to clients
collector = None # Collect updates from clients
def __init__(self, port=5556):
self.port = port
self.ctx = zmq.Context()
self.kvmap = {}
self.loop = IOLoop.instance()
# Set up our clone server sockets
self.snapshot = self.ctx.socket(zmq.ROUTER)
self.publisher = self.ctx.socket(zmq.PUB)
self.collector = self.ctx.socket(zmq.PULL)
self.snapshot.bind("tcp://*:%d" % self.port)
self.publisher.bind("tcp://*:%d" % (self.port + 1))
self.collector.bind("tcp://*:%d" % (self.port + 2))
# Wrap sockets in ZMQStreams for IOLoop handlers
self.snapshot = ZMQStream(self.snapshot)
self.publisher = ZMQStream(self.publisher)
self.collector = ZMQStream(self.collector)
# Register our handlers with reactor
self.snapshot.on_recv(self.handle_snapshot)
self.collector.on_recv(self.handle_collect)
self.flush_callback = PeriodicCallback(self.flush_ttl, 1000)
# basic log formatting:
logging.basicConfig(format="%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S",
level=logging.INFO)
def start(self):
# Run reactor until process interrupted
self.flush_callback.start()
try:
self.loop.start()
except KeyboardInterrupt:
pass
def handle_snapshot(self, msg):
"""snapshot requests"""
if len(msg) != 3 or msg[1] != "ICANHAZ?":
print "E: bad request, aborting"
dump(msg)
self.loop.stop()
return
identity, request, subtree = msg
if subtree:
# Send state snapshot to client
route = Route(self.snapshot, identity, subtree)
# For each entry in kvmap, send kvmsg to client
for k,v in self.kvmap.items():
send_single(k,v,route)
# Now send END message with sequence number
logging.info("I: Sending state shapshot=%d" % self.sequence)
self.snapshot.send(identity, zmq.SNDMORE)
kvmsg = KVMsg(self.sequence)
kvmsg.key = "KTHXBAI"
kvmsg.body = subtree
kvmsg.send(self.snapshot)
def handle_collect(self, msg):
"""Collect updates from clients"""
kvmsg = KVMsg.from_msg(msg)
self.sequence += 1
kvmsg.sequence = self.sequence
kvmsg.send(self.publisher)
ttl = kvmsg.get('ttl')
if ttl is not None:
kvmsg['ttl'] = time.time() + ttl
kvmsg.store(self.kvmap)
logging.info("I: publishing update=%d", self.sequence)
def flush_ttl(self):
"""Purge ephemeral values that have expired"""
for key,kvmsg in self.kvmap.items():
self.flush_single(kvmsg)
def flush_single(self, kvmsg):
"""If key-value pair has expired, delete it and publish the fact
to listening clients."""
        ttl = kvmsg.get('ttl')
        if ttl is not None and ttl <= time.time():
kvmsg.body = ""
self.sequence += 1
kvmsg.sequence = self.sequence
kvmsg.send(self.publisher)
del self.kvmap[kvmsg.key]
logging.info("I: publishing delete=%d", self.sequence)
def main():
clone = CloneServer()
clone.start()
if __name__ == '__main__':
main()
Clone models one to five are relatively simple. We're now going to get into unpleasantly complex territory here that has me getting up for another espresso. You should appreciate that making "reliable" messaging is complex enough that you always need to ask, "do we actually need this?" before jumping into it. If you can get away with unreliable, or "good enough" reliability, you can make a huge win in terms of cost and complexity. Sure, you may lose some data now and then. It is often a good trade-off. Having said that, and (sips) since the espresso is really good, let's jump in!
As you play with Model Three, you'll stop and restart the server. It might look like it recovers, but of course it's applying updates to an empty state, instead of the proper current state. Any new client joining the network will get just the latest updates, instead of all of them. So let's work out a design for making Clone work despite server failures.
Let's list the failures we want to be able to handle:
- Clone server process crashes and is automatically or manually restarted. The process loses its state and has to get it back from somewhere.
- Clone server machine dies and is off-line for a significant time. Clients have to switch to an alternate server somewhere.
- Clone server process or machine gets disconnected from the network, e.g. a switch dies. It may come back at some point, but in the meantime clients need an alternate server.
Our first step is to add a second server. We can use the Binary Star pattern from Chapter Four to organize these into primary and backup. Binary Star is a reactor, so it's useful that we already refactored the last server model into a reactor style.
We need to ensure that updates are not lost if the primary server crashes. The simplest technique is to send them to both servers.
The backup server can then act as a client, and keep its state synchronized by receiving updates as all clients do. It'll also get new updates from clients. It can't yet store these in its hash table, but it can hold onto them for a while.
So, Model Six introduces these changes over Model Five:
- We use a pub-sub flow instead of a push-pull flow for client updates (to the servers). The reasons: push sockets will block if there is no recipient, and they round-robin, so we'd need to open two of them. We'll bind the servers' SUB sockets and connect the clients' PUB sockets to them. This takes care of fanning out from one client to two servers.
- We add heartbeats to server updates (to clients), so that a client can detect when the primary server has died. It can then switch over to the backup server.
- We connect the two servers using the Binary Star bstar reactor class. Binary Star relies on the clients to 'vote' by making an explicit request to the server they consider "master". We'll use snapshot requests for this.
- We make all update messages uniquely identifiable by adding a UUID field. The client generates this, and the server propagates it back on re-published updates.
- The slave server keeps a "pending list" of updates that it has received from clients but not yet from the master server, or updates it has received from the master but not yet from clients. The list is ordered from oldest to newest, so that it is easy to remove updates from the head.
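The first of these changes (client updates over pub-sub) reverses the usual bind/connect direction. Sketched in PyZMQ, with placeholder hostnames and ports:
import zmq
ctx = zmq.Context()

# Server side: a SUB socket, bound, collects updates from all clients
collector = ctx.socket(zmq.SUB)
collector.setsockopt(zmq.SUBSCRIBE, '')
collector.bind("tcp://*:5558")

# Client side (in its own process): one PUB socket, connected to both
# servers, fans every update out to the pair
publisher = ctx.socket(zmq.PUB)
publisher.connect("tcp://primary:5558")   # placeholder hostnames and ports
publisher.connect("tcp://backup:5568")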
It's useful to design the client logic as a finite state machine. The client cycles through three states:
- The client opens and connects its sockets, and then requests a snapshot from the first server. To avoid request storms, it will ask any given server only twice. One request might get lost, that'd be bad luck. Two getting lost would be carelessness.
- The client waits for a reply (snapshot data) from the current server, and if it gets it, it stores it. If there is no reply within some timeout, it fails over to the next server.
- When the client has gotten its snapshot, it waits for and processes updates. Again, if it doesn't hear anything from the server within some timeout, it fails over to the next server.
The client loops forever. It's quite likely during startup or fail-over that some clients may be trying to talk to the primary server while others are trying to talk to the backup server. The Binary Star state machine handles this, hopefully accurately. (One of the joys of making designs like this is we cannot prove they are right, we can only prove them wrong. So it's like a guy falling off a tall building. So far, so good… so far, so good…)
Figure 68 - Clone Client Finite State Machine
Fail-over happens as follows:
- The client detects that primary server is no longer sending heartbeats, so has died. The client connects to the backup server and requests a new state snapshot.
- The backup server starts to receive snapshot requests from clients, and detects that primary server has gone, so takes over as primary.
- The backup server applies its pending list to its own hash table, and then starts to process state snapshot requests.
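Detecting the dead primary can be as simple as keeping an expiry timestamp per server and refreshing it on every message or heartbeat received; a rough sketch, not taken from the clone class below:
import time

SERVER_TTL = 5.0   # server considered dead if silent for this long (secs)

def failover_if_expired(cur_server, server_count, expiry):
    """Hypothetical helper: switch to the next server when the current
    one has been silent past its expiry time."""
    if time.time() > expiry:
        cur_server = (cur_server + 1) % server_count   # round-robin failover
        expiry = time.time() + SERVER_TTL              # fresh TTL for the new server
    return cur_server, expiry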
When the primary server comes back on-line, it will:
- Start up as slave server, and connect to the backup server as a Clone client.
- Start to receive updates from clients, via its SUB socket.
We make some assumptions:
- That at least one server will keep running. If both servers crash, we lose all server state and there's no way to recover it.
- That multiple clients do not update the same hash keys, at the same time. Client updates will arrive at the two servers in a different order. So, the backup server may apply updates from its pending list in a different order than the primary server would or did. Updates from one client will always arrive in the same order on both servers, so that is safe.
So the architecture for our high-availability server pair using the Binary Star pattern has two servers and a set of clients that talk to both servers.
Figure 69 - High-availability Clone Server Pair
As a first step to building this, we're going to refactor the client as a reusable class. This is partly for fun (writing asynchronous classes with ØMQ is like an exercise in elegance), but mainly because we want Clone to be really easy to plug-in to random applications. Since resilience depends on clients behaving correctly, it's much easier to guarantee this when there's a reusable client API. When we start to handle fail-over in clients, it does get a little complex (imagine mixing a Freelance client with a Clone client). So, reusability ahoy!
My usual design approach is to first design an API that feels right, then to implement that. So, we start by taking the clone client, and rewriting it to sit on top of some presumed class API called clone. Turning random code into an API means defining a reasonably stable and abstract contract with applications. For example, in Model Five, the client opened three separate sockets to the server, using endpoints that were hard-coded in the source. We could make an API with three methods, like this:
// Specify endpoints for each socket we need
clone_subscribe (clone, "tcp://localhost:5556");
clone_snapshot (clone, "tcp://localhost:5557");
clone_updates (clone, "tcp://localhost:5558");
// Times two, since we have two servers
clone_subscribe (clone, "tcp://localhost:5566");
clone_snapshot (clone, "tcp://localhost:5567");
clone_updates (clone, "tcp://localhost:5568");
But this is both verbose and fragile. It's not a good idea to expose the internals of a design to applications. Today, we use three sockets. Tomorrow, two, or four. Do we really want to go and change every application that uses the clone class? So to hide these sausage factory details, we make a small abstraction, like this:
// Specify primary and backup servers
clone_connect (clone, "tcp://localhost:5551");
clone_connect (clone, "tcp://localhost:5561");
Which has the advantage of simplicity (one server sits at one endpoint) but has an impact on our internal design. We now need to somehow turn that single endpoint into three endpoints. One way would be to bake the knowledge "client and server talk over three consecutive ports" into our client-server protocol. Another way would be to get the two missing endpoints from the server. We'll take the simplest way, which is:
- The server state router (ROUTER) is at port P.
- The server updates publisher (PUB) is at port P + 1.
- The server updates subscriber (SUB) is at port P + 2.
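Inside the clone class, a single connect call then fans out into the three real endpoints; a sketch of what that wiring might look like (the method and attribute names here are illustrative):
def connect(self, address, port):
    """Hypothetical internal wiring: one logical server, three sockets."""
    self.snapshot.connect("%s:%i" % (address, port))        # DEALER, for state requests
    self.subscriber.connect("%s:%i" % (address, port + 1))  # SUB, for server updates
    self.publisher.connect("%s:%i" % (address, port + 2))   # PUB, our updates to the server's SUB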
The clone class has the same structure as the flcliapi class from Chapter Four. It consists of two parts:
- An asynchronous clone agent that runs in a background thread. The agent handles all network I/O, talking to servers in real-time, no matter what the application is doing.
- A synchronous 'clone' class which runs in the caller's thread. When you create a clone object, that automatically launches an agent thread, and when you destroy a clone object, it kills the agent thread.
The frontend class talks to the agent class over an inproc 'pipe' socket. In C, the CZMQ thread layer creates this pipe automatically for us as it starts an "attached thread". This is a natural pattern for multithreading over ØMQ.
Without ØMQ, this kind of asynchronous class design would be weeks of really hard work. With ØMQ, it was a day or two of work. The result is still fairly complex, given the simplicity of the Clone protocol it actually runs, mainly because the agent has to manage snapshots, updates, and failover across two servers. We could turn the agent into a reactor, but that would make it harder to use in applications. So the API looks a bit like a key-value table that magically talks to some servers:
clone_t *clone_new (void);
void clone_destroy (clone_t **self_p);
void clone_connect (clone_t *self, char *address, char *service);
void clone_set (clone_t *self, char *key, char *value);
char *clone_get (clone_t *self, char *key);
So here is Model Six of the clone client, which has now become just a thin shell using the clone class:
"""
Clone client Model Six
"""
import random
import time
import zmq
from clone import Clone
SUBTREE = "/client/"
def main():
# Create and connect clone
clone = Clone()
clone.subtree = SUBTREE
clone.connect("tcp://localhost", 5556)
clone.connect("tcp://localhost", 5566)
try:
while True:
# Distribute as key-value message
key = "%d" % random.randint(1,10000)
value = "%d" % random.randint(1,1000000)
clone.set(key, value, random.randint(0,30))
time.sleep(1)
except KeyboardInterrupt:
pass
if __name__ == '__main__':
main()
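The class supports reads as well as writes; a client that only consumes the shared state would look something like this (a hypothetical snippet using the same Clone class):

from clone import Clone

clone = Clone()
clone.subtree = "/client/"
clone.connect("tcp://localhost", 5556)
clone.connect("tcp://localhost", 5566)
value = clone.get("/client/1234")   # '' if the key is absent from the agent's local map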
And here is the actual clone class implementation:
"""
clone - client-side Clone Pattern class
Author: Min RK <benjaminrk@gmail.com>
"""
import logging
import threading
import time
import zmq
from zhelpers import zpipe
from kvmsg import KVMsg
# If no server replies within this time, abandon request
GLOBAL_TIMEOUT = 4000 # msecs
# Server considered dead if silent for this long
SERVER_TTL = 5.0 # secs
# Number of servers we will talk to
SERVER_MAX = 2
# basic log formatting:
logging.basicConfig(format="%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S",
level=logging.INFO)
# =====================================================================
# Synchronous part, works in our application thread
class Clone(object):
ctx = None # Our Context
pipe = None # Pipe through to clone agent
agent = None # agent in a thread
_subtree = None # cache of our subtree value
def __init__(self):
self.ctx = zmq.Context()
self.pipe, peer = zpipe(self.ctx)
self.agent = threading.Thread(target=clone_agent, args=(self.ctx,peer))
self.agent.daemon = True
self.agent.start()
# ---------------------------------------------------------------------
# Clone.subtree is a property, which sets the subtree for snapshot
# and updates
@property
def subtree(self):
return self._subtree
@subtree.setter
def subtree(self, subtree):
"""Sends [SUBTREE][subtree] to the agent"""
        self._subtree = subtree  # cache locally as well as sending it to the agent
self.pipe.send_multipart(["SUBTREE", subtree])
def connect(self, address, port):
"""Connect to new server endpoint
Sends [CONNECT][address][port] to the agent
"""
self.pipe.send_multipart(["CONNECT", address, str(port)])
def set(self, key, value, ttl=0):
"""Set new value in distributed hash table
Sends [SET][key][value][ttl] to the agent
"""
self.pipe.send_multipart(["SET", key, value, str(ttl)])
def get(self, key):
"""Lookup value in distributed hash table
Sends [GET][key] to the agent and waits for a value response
If there is no clone available, will eventually return None.
"""
self.pipe.send_multipart(["GET", key])
try:
reply = self.pipe.recv_multipart()
except KeyboardInterrupt:
return
else:
return reply[0]
# =====================================================================
# Asynchronous part, works in the background
# ---------------------------------------------------------------------
# Simple class for one server we talk to
class CloneServer(object):
address = None # Server address
port = None # Server port
snapshot = None # Snapshot socket
subscriber = None # Incoming updates
expiry = 0 # Expires at this time
requests = 0 # How many snapshot requests made?
def __init__(self, ctx, address, port, subtree):
self.address = address
self.port = port
self.snapshot = ctx.socket(zmq.DEALER)
self.snapshot.linger = 0
self.snapshot.connect("%s:%i" % (address,port))
self.subscriber = ctx.socket(zmq.SUB)
self.subscriber.setsockopt(zmq.SUBSCRIBE, subtree)
self.subscriber.connect("%s:%i" % (address,port+1))
self.subscriber.linger = 0
# ---------------------------------------------------------------------
# Simple class for one background agent
# States we can be in
STATE_INITIAL = 0 # Before asking server for state
STATE_SYNCING = 1 # Getting state from server
STATE_ACTIVE = 2 # Getting new updates from server
class CloneAgent(object):
ctx = None # Own context
pipe = None # Socket to talk back to application
kvmap = None # Actual key/value dict
subtree = '' # Subtree specification, if any
servers = None # list of connected Servers
state = 0 # Current state
cur_server = 0 # If active, index of server in list
    sequence = 0 # last kvmsg processed
publisher = None # Outgoing updates
def __init__(self, ctx, pipe):
self.ctx = ctx
self.pipe = pipe
self.kvmap = {}
self.subtree = ''
self.state = STATE_INITIAL
        self.publisher = ctx.socket(zmq.PUB)  # PUB, so our updates fan out to both servers' SUB collectors
self.router = ctx.socket(zmq.ROUTER)
self.servers = []
def control_message (self):
msg = self.pipe.recv_multipart()
command = msg.pop(0)
if command == "CONNECT":
address = msg.pop(0)
port = int(msg.pop(0))
if len(self.servers) < SERVER_MAX:
self.servers.append(CloneServer(self.ctx, address, port, self.subtree))
self.publisher.connect("%s:%i" % (address,port+2))
else:
logging.error("E: too many servers (max. %i)", SERVER_MAX)
elif command == "SET":
key,value,sttl = msg
ttl = int(sttl)
self.kvmap[key] = value
# Send key-value pair on to server
kvmsg = KVMsg(0, key=key, body=value)
kvmsg["ttl"] = ttl
kvmsg.send(self.publisher)
elif command == "GET":
key = msg[0]
value = self.kvmap.get(key, '')
self.pipe.send(value)
# ---------------------------------------------------------------------
# Asynchronous agent manages server pool and handles request/reply
# dialog when the application asks for it.
def clone_agent(ctx, pipe):
agent = CloneAgent(ctx, pipe)
server = None
while True:
poller = zmq.Poller()
poller.register(agent.pipe, zmq.POLLIN)
poll_timer = None
server_socket = None
if agent.state == STATE_INITIAL:
# In this state we ask the server for a snapshot,
# if we have a server to talk to…
if agent.servers:
server = agent.servers[agent.cur_server]
logging.info ("I: waiting for server at %s:%d…",
server.address, server.port)
if (server.requests < 2):
server.snapshot.send_multipart(["ICANHAZ?", agent.subtree])
server.requests += 1
server.expiry = time.time() + SERVER_TTL
agent.state = STATE_SYNCING
server_socket = server.snapshot
elif agent.state == STATE_SYNCING:
# In this state we read from snapshot and we expect
# the server to respond, else we fail over.
server_socket = server.snapshot
elif agent.state == STATE_ACTIVE:
# In this state we read from subscriber and we expect
# the server to give hugz, else we fail over.
server_socket = server.subscriber
if server_socket:
# we have a second socket to poll:
poller.register(server_socket, zmq.POLLIN)
if server is not None:
poll_timer = 1e3 * max(0,server.expiry - time.time())
# ------------------------------------------------------------
# Poll loop
try:
items = dict(poller.poll(poll_timer))
        except (zmq.ZMQError, KeyboardInterrupt):
            break  # Interrupted, or context has been shut down
if agent.pipe in items:
agent.control_message()
elif server_socket in items:
kvmsg = KVMsg.recv(server_socket)
# Anything from server resets its expiry time
server.expiry = time.time() + SERVER_TTL
if (agent.state == STATE_SYNCING):
# Store in snapshot until we're finished
server.requests = 0
if kvmsg.key == "KTHXBAI":
agent.sequence = kvmsg.sequence
agent.state = STATE_ACTIVE
logging.info ("I: received from %s:%d snapshot=%d",
server.address, server.port, agent.sequence)
else:
kvmsg.store(agent.kvmap)
elif (agent.state == STATE_ACTIVE):
# Discard out-of-sequence updates, incl. hugz
if (kvmsg.sequence > agent.sequence):
agent.sequence = kvmsg.sequence
kvmsg.store(agent.kvmap)
action = "update" if kvmsg.body else "delete"
logging.info ("I: received from %s:%d %s=%d",
server.address, server.port, action, agent.sequence)
else:
# Server has died, failover to next
logging.info ("I: server at %s:%d didn't give hugz",
server.address, server.port)
agent.cur_server = (agent.cur_server + 1) % len(agent.servers)
agent.state = STATE_INITIAL
Finally, here is the sixth and last model of the clone server:
"""
Clone server Model Six
Author: Min RK <benjaminrk@gmail.com>
"""
import logging
import time
import zmq
from zmq.eventloop.ioloop import PeriodicCallback
from zmq.eventloop.zmqstream import ZMQStream
from bstar import BinaryStar
from kvmsg import KVMsg
from zhelpers import dump
# simple struct for routing information for a key-value snapshot
class Route:
def __init__(self, socket, identity, subtree):
self.socket = socket # ROUTER socket to send to
self.identity = identity # Identity of peer who requested state
self.subtree = subtree # Client subtree specification
def send_single(key, kvmsg, route):
"""Send one state snapshot key-value pair to a socket"""
# check front of key against subscription subtree:
if kvmsg.key.startswith(route.subtree):
# Send identity of recipient first
route.socket.send(route.identity, zmq.SNDMORE)
kvmsg.send(route.socket)
class CloneServer(object):
# Our server is defined by these properties
ctx = None # Context wrapper
kvmap = None # Key-value store
bstar = None # Binary Star
sequence = 0 # How many updates so far
port = None # Main port we're working on
peer = None # Main port of our peer
publisher = None # Publish updates and hugz
collector = None # Collect updates from clients
subscriber = None # Get updates from peer
pending = None # Pending updates from client
primary = False # True if we're primary
master = False # True if we're master
slave = False # True if we're slave
def __init__(self, primary=True, ports=(5556,5566)):
self.primary = primary
if primary:
self.port, self.peer = ports
frontend = "tcp://*:5003"
backend = "tcp://localhost:5004"
self.kvmap = {}
else:
self.peer, self.port = ports
frontend = "tcp://*:5004"
backend = "tcp://localhost:5003"
self.ctx = zmq.Context.instance()
self.pending = []
self.bstar = BinaryStar(primary, frontend, backend)
self.bstar.register_voter("tcp://*:%i" % self.port, zmq.ROUTER, self.handle_snapshot)
# Set up our clone server sockets
self.publisher = self.ctx.socket(zmq.PUB)
self.collector = self.ctx.socket(zmq.SUB)
self.collector.setsockopt(zmq.SUBSCRIBE, b'')
self.publisher.bind("tcp://*:%d" % (self.port + 1))
self.collector.bind("tcp://*:%d" % (self.port + 2))
# Set up our own clone client interface to peer
self.subscriber = self.ctx.socket(zmq.SUB)
self.subscriber.setsockopt(zmq.SUBSCRIBE, b'')
self.subscriber.connect("tcp://localhost:%d" % (self.peer + 1))
# Register state change handlers
self.bstar.master_callback = self.become_master
self.bstar.slave_callback = self.become_slave
# Wrap sockets in ZMQStreams for IOLoop handlers
self.publisher = ZMQStream(self.publisher)
self.subscriber = ZMQStream(self.subscriber)
self.collector = ZMQStream(self.collector)
# Register our handlers with reactor
self.collector.on_recv(self.handle_collect)
self.flush_callback = PeriodicCallback(self.flush_ttl, 1000)
self.hugz_callback = PeriodicCallback(self.send_hugz, 1000)
# basic log formatting:
logging.basicConfig(format="%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S",
level=logging.INFO)
def start(self):
# start periodic callbacks
self.flush_callback.start()
self.hugz_callback.start()
# Run bstar reactor until process interrupted
try:
self.bstar.start()
except KeyboardInterrupt:
pass
def handle_snapshot(self, socket, msg):
"""snapshot requests"""
if msg[1] != "ICANHAZ?" or len(msg) != 3:
logging.error("E: bad request, aborting")
dump(msg)
self.bstar.loop.stop()
return
identity, request = msg[:2]
if len(msg) >= 3:
subtree = msg[2]
# Send state snapshot to client
route = Route(socket, identity, subtree)
# For each entry in kvmap, send kvmsg to client
for k,v in self.kvmap.items():
send_single(k,v,route)
# Now send END message with sequence number
logging.info("I: Sending state shapshot=%d" % self.sequence)
socket.send(identity, zmq.SNDMORE)
kvmsg = KVMsg(self.sequence)
kvmsg.key = "KTHXBAI"
kvmsg.body = subtree
kvmsg.send(socket)
def handle_collect(self, msg):
"""Collect updates from clients
If we're master, we apply these to the kvmap
If we're slave, or unsure, we queue them on our pending list
"""
kvmsg = KVMsg.from_msg(msg)
if self.master:
self.sequence += 1
kvmsg.sequence = self.sequence
kvmsg.send(self.publisher)
            ttl = kvmsg.get('ttl')
            if ttl is not None:
                # ttl arrives as a relative value; store it as an absolute expiry time
                kvmsg['ttl'] = time.time() + float(ttl)
kvmsg.store(self.kvmap)
logging.info("I: publishing update=%d", self.sequence)
else:
# If we already got message from master, drop it, else
# hold on pending list
if not self.was_pending(kvmsg):
self.pending.append(kvmsg)
def was_pending(self, kvmsg):
"""If message was already on pending list, remove and return True.
Else return False.
"""
found = False
for idx, held in enumerate(self.pending):
if held.uuid == kvmsg.uuid:
found = True
break
if found:
self.pending.pop(idx)
return found
def flush_ttl(self):
"""Purge ephemeral values that have expired"""
if self.kvmap:
for key,kvmsg in self.kvmap.items():
self.flush_single(kvmsg)
def flush_single(self, kvmsg):
"""If key-value pair has expired, delete it and publish the fact
to listening clients."""
        ttl = float(kvmsg.get('ttl', 0))
        if ttl and ttl <= time.time():
kvmsg.body = ""
self.sequence += 1
kvmsg.sequence = self.sequence
kvmsg.send(self.publisher)
del self.kvmap[kvmsg.key]
logging.info("I: publishing delete=%d", self.sequence)
def send_hugz(self):
"""Send hugz to anyone listening on the publisher socket"""
kvmsg = KVMsg(self.sequence)
kvmsg.key = "HUGZ"
kvmsg.body = ""
kvmsg.send(self.publisher)
# ---------------------------------------------------------------------
# State change handlers
def become_master(self):
"""We're becoming master
The backup server applies its pending list to its own hash table,
and then starts to process state snapshot requests.
"""
self.master = True
self.slave = False
# stop receiving subscriber updates while we are master
self.subscriber.stop_on_recv()
# Apply pending list to own kvmap
while self.pending:
kvmsg = self.pending.pop(0)
self.sequence += 1
kvmsg.sequence = self.sequence
kvmsg.store(self.kvmap)
logging.info ("I: publishing pending=%d", self.sequence)
def become_slave(self):
"""We're becoming slave"""
# clear kvmap
self.kvmap = None
self.master = False
self.slave = True
self.subscriber.on_recv(self.handle_subscriber)
def handle_subscriber(self, msg):
"""Collect updates from peer (master)
We're always slave when we get these updates
"""
if self.master:
logging.warn("received subscriber message, but we are master %s", msg)
return
# Get state snapshot if necessary
if self.kvmap is None:
self.kvmap = {}
snapshot = self.ctx.socket(zmq.DEALER)
snapshot.linger = 0
snapshot.connect("tcp://localhost:%i" % self.peer)
logging.info ("I: asking for snapshot from: tcp://localhost:%d",
self.peer)
snapshot.send_multipart(["ICANHAZ?", ''])
while True:
try:
kvmsg = KVMsg.recv(snapshot)
except KeyboardInterrupt:
# Interrupted
self.bstar.loop.stop()
return
if kvmsg.key == "KTHXBAI":
self.sequence = kvmsg.sequence
break # Done
kvmsg.store(self.kvmap)
logging.info ("I: received snapshot=%d", self.sequence)
# Find and remove update off pending list
kvmsg = KVMsg.from_msg(msg)
# update integer ttl -> timestamp
ttl = kvmsg.get('ttl')
if ttl is not None:
kvmsg['ttl'] = time.time() + ttl
if kvmsg.key != "HUGZ":
if not self.was_pending(kvmsg):
# If master update came before client update, flip it
# around, store master update (with sequence) on pending
# list and use to clear client update when it comes later
self.pending.append(kvmsg)
# If update is more recent than our kvmap, apply it
if (kvmsg.sequence > self.sequence):
self.sequence = kvmsg.sequence
kvmsg.store(self.kvmap)
logging.info ("I: received update=%d", self.sequence)
def main():
import sys
if '-p' in sys.argv:
primary = True
elif '-b' in sys.argv:
primary = False
else:
print "Usage: clonesrv6.py { -p | -b }"
SystemExit(1)
clone = CloneServer(primary)
clone.start()
if __name__ == '__main__':
main()
This main program is only a few hundred lines of code, but it took some time to get working. To be accurate, building Model Six took about a full week of "sweet god, this is just too complex for the Guide" hacking. We've assembled pretty much everything and the kitchen sink into this small application. We have failover, ephemeral values, subtrees, and so on. What surprised me was that the upfront design was pretty accurate. But the details of writing and debugging so many socket flows are something special. Here's how I made this work:
- By using reactors (bstar, on top of zloop), which remove a lot of grunt-work from the code and leave what remains simpler and more obvious. The whole server runs as one thread, so there's no inter-thread weirdness going on. Just pass a structure pointer ('self') around to all handlers, which can do their thing happily. One nice side effect of using reactors is that code, being less tightly integrated into a poll loop, is much easier to reuse. Large chunks of Model Six are taken from Model Five. A skeleton of this reactor structure is sketched after this list.
- By building it piece by piece, and getting each piece working properly before going on to the next one. Since there are four or five main socket flows, that meant quite a lot of debugging and testing. I debug just by printing stuff to the console (e.g. dumping messages). There's no sense in actually opening a debugger for this kind of work.
- By always testing under Valgrind, so that I'm sure there are no memory leaks. In C this is a major concern; you can't delegate to a garbage collector. Using proper and consistent abstractions like kvmsg and CZMQ helps enormously.
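Here is what that reactor structure boils down to, as a sketch in Python using the same BinaryStar and PeriodicCallback pieces as the listing above (handler bodies elided; the port numbers are the ones Model Six uses):

import zmq
from zmq.eventloop.ioloop import PeriodicCallback
from bstar import BinaryStar

class Skeleton(object):
    """Single-threaded reactor server: every handler just receives 'self'."""
    def __init__(self, primary):
        port = 5556 if primary else 5566
        if primary:
            frontend, backend = "tcp://*:5003", "tcp://localhost:5004"
        else:
            frontend, backend = "tcp://*:5004", "tcp://localhost:5003"
        self.bstar = BinaryStar(primary, frontend, backend)
        # Client snapshot requests are only answered while we are master
        self.bstar.register_voter("tcp://*:%i" % port, zmq.ROUTER, self.handle_snapshot)
        self.bstar.master_callback = self.become_master
        self.bstar.slave_callback = self.become_slave
        self.hugz = PeriodicCallback(self.send_hugz, 1000)

    def start(self):
        self.hugz.start()
        self.bstar.start()   # run the reactor until the process is interrupted

    # All handlers run in the one reactor thread, so no locking is needed
    def handle_snapshot(self, socket, msg): pass
    def become_master(self): pass
    def become_slave(self): pass
    def send_hugz(self): pass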
I'm sure the code still has flaws which kind readers will spend weekends debugging and fixing for me. I'm happy enough with this model to use it as the basis for real applications.
To test the sixth model, start the primary server and backup server, and a set of clients, in any order. Then kill and restart one of the servers, randomly, and keep doing this. If the design and code are accurate, clients will continue to get the same stream of updates from whatever server is currently master.
After this much work to build reliable pub-sub, we want some guarantee that we can safely build applications to exploit the work. A good start is to write up the protocol. This lets us make implementations in other languages and lets us improve the design on paper, rather than hands-deep in code.
Here, then, is the Clustered Hashmap Protocol, which "defines a cluster-wide key-value hashmap, and mechanisms for sharing this across a set of clients. CHP allows clients to work with subtrees of the hashmap, to update values, and to define ephemeral values."
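As an informal summary of what that protocol covers, here are the commands that actually appear on the wire in the listings above (the constant names are just for illustration, not part of CHP itself):

SNAPSHOT_REQUEST = "ICANHAZ?"   # client DEALER -> server ROUTER, carrying the wanted subtree
SNAPSHOT_END     = "KTHXBAI"    # server ends a snapshot; its sequence number marks the last update included
HEARTBEAT        = "HUGZ"       # sent on the server's updates stream roughly once a second when idle
# Everything else is a key-value update carrying a key, sequence number, UUID,
# properties such as ttl, and a body, where an empty body means "delete this key".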
I'll end this chapter with a fun little machine that exploits the zmq_proxy(3) method to show you what's happening on a pub-sub network. It's deceptively simple: