Monday, November 22, 2010

Enterprise Rhaptos 1.6 Released


Enterprise Rhaptos 1.6 has been released. It contains the new Collection Composer recently added to cnx.org as well as a handful of bug fixes.

See the upgrade instructions and the list of updated products for more details.

Friday, November 19, 2010

HAProxy load balancing leads to better Connexions performance


We have two different performance milestones coming up at Connexions: one to improve authoring performance by reducing the size of certain catalogs whose lookup time grows with size, and one to improve viewing times by making our slowly changing content cacheable.

In preparation for these milestones, we are updating our basic request architecture, and we have already made some nice improvements to the average times for viewing content and using the authoring system. And this is before starting on the real performance work.

We were using Squid to load balance between front-end ZEO servers, and we have switched to HAProxy for load balancing. Take a look at these graphs of average load times. The Y-axis shows the time to service a particular request; we measure a few different request types, and each shows up in a different color. The graph on the left shows performance for the last month and the one on the right shows performance for the year. You should notice a dramatic drop in service times a week and a half ago, when HAProxy took over load balancing.

The graph below shows the same timings, but for actions that authors take. It shows a similar speedup (lower lines) for authoring. In the graphs for authoring you may also notice a major performance improvement in February of this year. That improvement was thanks to increasing the size of an application object cache.

So why would changing the load balancer have such a big benefit? We were surprised by the magnitude, but not the direction, of the improvement. Squid isn't specifically designed for load balancing, and we were using the Internet Cache Protocol (ICP) to approximate it: Squid would send each front end an ICP request and use the speed at which it answered "no" to decide which one should handle the request. HAProxy, by contrast, is designed for load balancing. It did take some configuration, but it is working much better than contorting ICP into a load balancer. Our settings tell HAProxy to choose the front end with the fewest current connections and to break ties round-robin.
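For illustration, a backend configured with this strategy looks roughly like the sketch below. The server names and addresses are made up, not our actual configuration; the key line is `balance leastconn`, which picks the server with the fewest open connections and breaks ties round-robin.

```
frontend www
    bind *:80
    default_backend zeo_frontends

backend zeo_frontends
    # Send each request to the front end with the fewest open
    # connections; HAProxy breaks ties round-robin.
    balance leastconn
    server fe1 10.0.0.11:8080 check
    server fe2 10.0.0.12:8080 check
```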

As background on our traffic: Connexions serves about 2 million unique visitors per month. We receive between 50 and 60 requests per second, peaking at around 100 requests per second. Our performance is still spiky; occasionally a request takes a very long time to serve, and that can be very frustrating for viewers and authors. We will continue to report on performance in this blog as we improve the infrastructure.

Thursday, November 11, 2010

New Collection Editor


Thanks to @hedleyroos and @rochecompaan out at Upfront Systems, we're delighted to have a shiny new collection editor. There's a separate post that gives a great overview of how to use it but in this post I'll go under the hood and describe a bit about the technologies used and how they were implemented. Finally, there are a couple of "gotchas" we ran into that are worth discussing.

The editor uses two JavaScript frameworks, ExtJS and jQuery, along with Zope Page Templates (ZPT); Zope is the application server behind Connexions, and ZPT is its templating language. The editor uses AJAX (Asynchronous JavaScript and XML) to call the Connexions server, which eliminates the need to refresh the entire page as the old editor did. Users start off with an ExtJS tree that shows how the collection is organized. The tree is reorderable, with links to add or delete subcollections and modules. On load, the collection hierarchy is requested as JSON (JavaScript Object Notation, a lightweight data-interchange format) from a ZPT.
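As a rough illustration of the kind of payload involved (the field names and collection contents here are made up, not the actual ones Connexions uses), a server-side method might serialize the hierarchy into nested node objects that a JavaScript tree widget can consume:

```python
import json

def tree_to_json(node):
    """Serialize a (title, children) hierarchy into the nested
    dict/list shape a JavaScript tree widget expects.
    The field names here are illustrative."""
    title, children = node
    return {
        "text": title,
        "leaf": not children,
        "children": [tree_to_json(c) for c in children],
    }

# A hypothetical collection hierarchy.
collection = ("Physics 101", [
    ("Chapter 1: Kinematics", [
        ("Module: Velocity", []),
        ("Module: Acceleration", []),
    ]),
    ("Chapter 2: Dynamics", []),
])

payload = json.dumps(tree_to_json(collection))
```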

Every time a module or subcollection is dragged and dropped, a call is made to Zope to update its objects. Since these objects can be moved anywhere in the hierarchy they may get a new parent. To handle this in Zope, we cut, paste, and reorder.
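The move itself can be sketched in plain Python. This models the idea only, not the actual Zope API, which performs the equivalent cut/paste operations on folder objects:

```python
def move_node(tree, node_id, new_parent_id, position):
    """Move node_id under new_parent_id at the given index.

    tree maps id -> {"parent": id or None, "children": [ids]}.
    Mirrors the sequence described above: cut (remove from the
    old parent), paste (attach to the new parent), and reorder
    (insert at the requested position among its new siblings).
    """
    old_parent = tree[node_id]["parent"]
    tree[old_parent]["children"].remove(node_id)   # cut
    tree[node_id]["parent"] = new_parent_id        # paste
    tree[new_parent_id]["children"].insert(position, node_id)  # reorder
```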

Gotcha 1: Initially the Zope UID was used to keep the JavaScript and Zope objects in sync. However, when an object is pasted its UID changes. The code then used both the UID and the path to the object (built from ids), but supporting a move to a different parent would have required updating both, so we switched to calculating the path on each update.
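A sketch of what "calculating the path on each update" means (illustrative, not the actual implementation): walk up the parent chain and join the stable ids, so the path is always current even after a paste has given the object a new UID.

```python
def object_path(tree, node_id):
    """Return the slash-separated path of ids from the root down
    to node_id.  tree maps id -> {"parent": id or None}.

    Because the path is rebuilt from ids on every call, it stays
    correct even when a cut/paste has assigned the node a new UID.
    """
    parts = []
    current = node_id
    while current is not None:
        parts.append(current)
        current = tree[current]["parent"]
    return "/".join(reversed(parts))
```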

The editor also has popup dialog boxes for editing properties of the collection. When a button to open a dialog box is clicked, an AJAX call is made to Zope and the resulting HTML is injected directly into the page. The HTML sent back also contains a script tag with all the JavaScript logic the dialog needs. Sending back HTML from the server has two advantages: Zope's internationalization machinery can be used to support other languages, and keeping the JavaScript in the same file makes it easier to track which code goes with which dialog box.

Gotcha 2: Those pesky corner cases (in form submission, cancellation, and validation).
Since we request HTML from Zope, very little state is stored in the browser. To support canceling a dialog box, we first had to store the default values. When a user enters some invalid data and submits, Zope does a nice job of re-rendering the form with hints on how to fix the problem. However, if the user then cancels, we need to scrub the form clean, removing the validation errors as well.

So, that's a quick run through the new Collection Editor. If you're interested in the code, it can be viewed on our trac page.

Monday, November 1, 2010

WebCraft Textbook -- be a part of building it this week.

The Connexions blog describes the web development textbook we are building on Connexions during the Mozilla Drumbeat festival this week. Folks on this mailing list would be excellent authors of chapters and sections for the book. Join us. -- Read more here -- http://blog.cnx.org/2010/11/webcraft-101-introduction-to-web.html