Whither AMQP

Where is AMQP Going?

If you’ve used AMQP in the past, you may be in for a surprise when the AMQP working group publishes AMQP/1.0 in 2011 or 2012.

Your trusted exchange, queue, and binding model is no longer valid; “links” have taken its place. Your code will need to be rewritten. If you wrote an API stack, you should start over. Any code written against the old AMQP model will have to be thrown away and redone.
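For readers who have not written against AMQP/0.9.1, here is a minimal sketch of the model being discarded, using the Python pika client. The exchange name, host, and payload are illustrative placeholders of mine, not anything the article or the spec prescribes.

    # Minimal AMQP/0.9.1 sketch of the exchange-queue-binding model
    # (Python "pika" client; names and host are illustrative only).
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # Publishers send to an exchange, never directly to a queue.
    channel.exchange_declare(exchange="events", exchange_type="fanout")

    # Each subscriber declares its own queue and binds it to the exchange.
    result = channel.queue_declare(queue="", exclusive=True)
    queue_name = result.method.queue
    channel.queue_bind(exchange="events", queue=queue_name)

    # The bindings decide which queues receive a copy of each message.
    channel.basic_publish(exchange="events", routing_key="", body=b"hello")

    connection.close()

Everything in this snippet (the exchange, the queue, the binding) is exactly the vocabulary that AMQP/1.0 replaces with links.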

In September 2008, JPMorganChase and RedHat unveiled their new proposals for AMQP/1.0 without running code and without agreement from the other players on the issues. iMatix and others argued that the exchange-queue-binding model should not be abandoned: it would force firms like iMatix, and all of our users, to rewrite far too much code. We had already pointed out numerous issues with AMQP that did need fixing, such as the binary command framing that guarantees protocol fragmentation. Because the AMQP/1.0 draft was confidential and could not be discussed in public, these early AMQP/1.0 conversations were held off-list, and we were unable to reveal several aspects of this work until much later.
The people promoting AMQP/1.0 never made it clear what was actually wrong with AMQP/0.9.1, so we could not see what problems they were trying to solve.

There is still no historical record explaining what the problems with AMQP/0.9.1 were, and why AMQP/1.0 is the right remedy for them.

However, in May 2010 John O’Hara, the head of the working group and original inventor of the AMQP concept, finally provided an explanation in a public email. What had happened was that slow subscribers were causing JPMorganChase’s systems to crash (or become very hard to maintain). JPMC and RedHat discussed the issue and came up with a solution: discard the old model and make a new one. In O’Hara’s words:

The bottom line is this: From the perspective of Exchange/Binding, links seem extremely weird.
So why are Links even necessary?
They result from a meeting that Rob, Rafi, Carl, and I had in a dark room in the basement of 60 Victoria Embankment back at JPM in London.
Links were created to enable a declarative way of distributing events to numerous consumers while *admitting* (to use your language) a variety of methods for buffering pending messages for slower consumers.
Pending messages on the broker were buffered in a private reply queue until the client picked them up. This works well when consumers keep up with the message flow.
When there were many slow subscribers, state built up on the broker. Memory is not a major concern, because the implementation uses references to messages. But the operations staff had complained that they had no way to look at the “topic” and see how far behind the event stream each client was. With a separate queue per subscriber, operations staff can see how each subscriber is doing relative to the volume of the event stream.
So we had a dilemma. Both approaches are equally valid: i) an exchange routing messages into several output buffers, and ii) a single buffer with multiple pointers. With links, an implementation now has the choice of fulfilling such a subscription request either by mapping it onto exchanges and bindings, or by using a single queue with multiple tail pointers.

I’m done now.
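To make the tradeoff concrete, here is a rough sketch of my own (hypothetical, not taken from the spec or from any broker) of the two buffering strategies described above: a separate output queue per subscriber, versus one shared buffer with a tail pointer per subscriber. All class and method names are invented for illustration.

    from collections import defaultdict, deque

    # Strategy i): the exchange routes each message into a separate output
    # buffer per subscriber. Easy to see how far behind each subscriber is,
    # but every slow subscriber accumulates its own backlog on the broker.
    class PerSubscriberQueues:
        def __init__(self):
            self.queues = defaultdict(deque)

        def subscribe(self, name):
            self.queues[name]  # creates an empty queue for this subscriber

        def publish(self, message):
            for q in self.queues.values():
                q.append(message)

        def fetch(self, name):
            return self.queues[name].popleft() if self.queues[name] else None

        def backlog(self, name):
            return len(self.queues[name])

    # Strategy ii): one shared buffer, with a tail pointer per subscriber.
    # Messages are stored once; each subscriber's lag is just an offset.
    class SharedBufferWithPointers:
        def __init__(self):
            self.log = []
            self.pointers = {}

        def subscribe(self, name):
            self.pointers[name] = len(self.log)

        def publish(self, message):
            self.log.append(message)

        def fetch(self, name):
            i = self.pointers[name]
            if i >= len(self.log):
                return None
            self.pointers[name] = i + 1
            return self.log[i]

        def backlog(self, name):
            return len(self.log) - self.pointers[name]

Either structure lets an operator ask how far behind a given subscriber is; the difference is whether that backlog lives as per-subscriber queues or as offsets into one shared log.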

Every AMQP user and developer will have to start over due to an implementation issue (how to prevent servers from running out of memory and crashing).

Think about that for a moment. I won’t go on a tirade about why developers should not try to design protocols. I’ll simply state the problem (how to handle slow consumers) and the answer (“throw away AMQP and start again”).

There are other changes in AMQP/1.0, but this one will have the biggest negative impact on everyone.

Perhaps it is engineering for its own sake. As for RedHat, their approach to AMQP has always been one of change, command, and control. Imposing drastic, disastrous changes on the protocol is a great way for RedHat to gain control of AMQP and of the AMQP market. RedHat’s moves to patent AMQP features such as XML-based routing are consistent with this long-term strategy.

Ironically, we had already explained our approach to this exact problem in January 2009, having posted the Direct Mode solution on the AMQP wiki in December 2008. When your server breaks because it is holding too many messages for slow subscribers, don’t change your protocol. Push those messages out to the subscribers immediately. Keep private queues off the central servers! Slow consumers only need shared queues, which handle them perfectly well.
We implemented this at A Random Indices Company, where OpenAMQ cheerfully pushes data for The Random Industrial Average and other indices. Direct Mode is a simple extension protocol that is fully backwards compatible; it does not require a new AMQP.

But for us, that is mostly history now. We have concluded that AMQP will fail. Large companies will use it, certainly, but the open source community will not forget the insult of arbitrary protocol changes made for no apparent reason. In this respect, iMatix is just a typical open source team: small, meritocratic, and endowed with a long memory. Besides, 0MQ is considerably easier to use and, in the end, does roughly the same job as AMQP, only faster and with less stress. Naturally, 0MQ places queues next to the consumer, where they belong, and feeds them elegantly using asynchronous I/O.
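For comparison, here is a minimal 0MQ publish-subscribe sketch using the Python pyzmq binding; the endpoint, payload, and the short sleep are illustrative choices of mine, not anything prescribed here. The sleep simply gives the subscription time to propagate when both sockets run in one process.

    import time
    import zmq

    context = zmq.Context()

    # Publisher side: no broker, no exchange or queue declarations.
    pub = context.socket(zmq.PUB)
    pub.bind("tcp://*:5556")

    # Subscriber side (normally a separate process).
    sub = context.socket(zmq.SUB)
    sub.connect("tcp://localhost:5556")
    sub.setsockopt(zmq.SUBSCRIBE, b"")   # subscribe to everything

    time.sleep(0.5)                      # let the subscription propagate

    pub.send(b"hello")
    print(sub.recv())

The point is where the queues live: in the sockets at each peer, fed by asynchronous I/O, rather than in a central broker that must hold state for every slow subscriber.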

So if you are using AMQP today, take a look at AMQP/1.0 and ask the AMQP working group to explain the changes. Just ask: why?

Then, when you get a confused answer, or no answer at all, download 0MQ and see what messaging can look like when it is done right.
