What I learned at DrupalCamp Montreal

Last weekend I attended DrupalCamp Montreal. It was a great experience. The videos of all the talks are already up so you don’t have to take my word for it.  Still, I thought I’d summarize the key points that I took home from it.

use the modules that the developers love
Drupal developers use and love Git and Drush for their development needs, and it sounds like many of them also pair Git with the Features module for their development work.

One developer admitted that upgrading Drupal sites across versions is such a pain that he usually just starts with a fresh install of the current Drupal version and uses the Feeds and Features modules to bring in old content and settings. His talk, Feeding Drupal, really opened my eyes to how powerful Feeds can be. For example, you can use Firebug to copy the XPath of an element on a page, use it in Feeds, and then create a regular import of that element. Translation: you can set up a feed of, for example, the list of references from a particular Wikipedia page even though there’s no RSS feed for such a collection of elements. Very very cool.
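The XPath trick can be sketched outside of Drupal, too. Here is a minimal Python sketch (standard library only; the HTML fragment and its `class="references"` markup are made up for illustration) of how a copied XPath pulls a list of reference items out of a page:

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for a fetched page (hypothetical markup, not real Wikipedia HTML).
page = """
<html><body>
  <ol class="references">
    <li>Smith, J. (2010). A paper.</li>
    <li>Doe, A. (2009). Another paper.</li>
  </ol>
</body></html>
"""

root = ET.fromstring(page)
# The XPath you'd copy out of Firebug, trimmed to ElementTree's supported subset:
refs = [li.text for li in root.findall(".//ol[@class='references']/li")]
print(refs)
```

Feeds does the fetching and scheduling for you; the point is just that one XPath expression is enough to turn an arbitrary chunk of a page into a repeating import.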

The other module session I really learned a lot from was the one on Webform (webcast). For some reason, our IT department doesn’t want to install Webform in their own instance of Drupal, but I can’t see any reason not to use it. Perhaps it’s a security issue? On that note, I’m going to get a copy of Cracking Drupal, as it was heavily recommended in the Drupal Security session (webcast), to make sure we’ve locked things down.

responsive web design
Speaking of Firebug: now that I have been delving into CSS, I understand why web developers love it so. And that gets to my next take-away from DrupalCamp: CSS3 is your new best friend, and this new friend will introduce you to new best friends like Responsive Web Design. Responsive Web Design is a collection of design concepts that allow a single site to be served up to mobile phones, tablets, and monster-sized desktop monitors and still look good. The Boston Globe is an example of responsive web design – try loading up the page and then shrinking your browser. I’ve already ordered the Responsive Web Design book for the library from A Book Apart, as well as their books on CSS3 and HTML5.

doing mobile in Drupal
There are some Drupal themes built around the mobile-first concepts of responsive design, and the designer of the most popular of these themes was present to showcase his work: the Omega Theme. (Omega looks wonderful, but it’s probably too complex at the moment for our library’s relatively simple needs.)

An alternative approach is to build specifically for “touch” apps using jQuery Mobile but, as I learned from a talk at DrupalCamp, it’s best to hold off on this approach until differences in PHP interpretation between jQuery Mobile and Drupal are worked out. I still haven’t delved into jQuery properly, but I’m beginning to understand why it’s so popular with developers. Not only is its library very small in size, but it is also rigorously tested on all the major browsers and, in the case of jQuery Mobile, on a ridiculous number of devices.

html5 will melt your brain
Jen Simmons’ keynote on the potential of HTML5 (webcast is here) makes a strong case that we are about to experience another quantum leap in web experience, not unlike the change from the static HTML pages of 1997 to the dynamic world of Web 2.0. That is, when designers begin to understand what HTML5-capable browsers can do: offline storage, APIs, and MySQL-like tables. In other words, websites can act more like apps that can be used offline if need be. Improved syncing from the server to the browser can also be exploited in a multitude of ways: live multi-player gaming, dynamic updates that don’t require a browser refresh, and things like holding a webcast in a browser. In other words, with HTML5 we should think about building library software for our users, not just a library website. See what I mean? Brain-melting.

The present and future of Drupal
One of my favourite talks was from Angie “webchick” Byron. Like her, I have taken ten years to go from “open source cheerleader” to “open source contributor” (well, supporter may be a better word, as I have yet to make my first commit). The strength of Drupal is not the code but the community, and she helped me better understand how that community works.

One of the goals of Drupal 8 is to have it be usable right out of the box. Right now, Drupal is more like a box of Lego that you have to put together yourself. Wouldn’t it be great if, when Drupal 8 came around, there was a “library framework” that libraries could download and then modify for their community? Just musing out loud, but I think that could be a really meaningful sabbatical project.

But that is far in the future. Right now, I need to hustle and get our A to Z list ready for prime time. It might be ready as soon as next week!

Posted in process

The A to Z of the A to Z list

I know things seem quiet on the Leddy Library website front, but trust me, we are working away, getting our backend ready [insert bootylicious joke here], and working out interface ideas.

One bit of advice that floats around those occupied with User Experience is to avoid completely reinventing an interface when your users already use and understand something similar.  Designers of academic library websites should look to current non-academic library websites to borrow what works from elsewhere. So let’s do that.

Here’s a screenshot of our existing A to Z list:

UWIN a to z

So, what popular websites serve up links? How about search engines?

google search results

And here’s Bing: almost the same results, but the URL comes after the description:

bing search results

So the elements displayed are: title, URL, and a one- or two-line description. And I found a library that seems to have taken this design cue to heart. This is from MIT:

What’s interesting about this particular set-up is that MIT appears to have gotten rid of the “database description” field/page that characterizes almost all other academic library A to Z lists of databases.

While “database description” pages are not well-used, I’m still not entirely convinced that we should get rid of them entirely at this point. In our Drupal set-up, we are conceiving of each database as “a node” containing its URL, descriptions, subjects, and tags that reflect coverage data, and the A to Z list will be a view of these nodes.

I do think we can do better than the ‘i’ icon that we currently use. I think much would be improved just by swapping out the icon for the text ‘info’, like UNC does (although I think, in the parlance of the Internet, it should read [more]).

I have two other design features that I’m still working out in my mind, deciding whether they should be pursued at this point. As mentioned before, I’m curious whether logos distract from or enhance the experience of browsing a long list. The University of Rochester does this and I’m still not sure whether it works (but I do know I’m not a fan of the “lock” icon).

Notice that they use ‘more’ instead of ‘info’. Maybe we could split the difference and do what the New York Public Library does and use ‘more info’.

The other design element that I would like to explore is the use of tags to describe the databases, not unlike delicious:

I think I’ll put this thought away because we really need to get going on building an A to Z list and introducing tags would essentially create many different lists of content.

Posted in design

reading on combined search results

The fine folks at the University of Michigan recently posted a survey of library website search results, with summaries and screenshots.



Posted in required reading

Drupal 7 and RDF

I made a bit of a breakthrough in my own understanding of RDF and Drupal 7 a couple weeks ago and I’ve written about it in my personal blog here.

After I reported to the web team last week that Drupal automatically generates FOAF profiles for every Drupal user, we talked briefly about establishing usernames on Drupal as the same as our institutional email usernames to keep things unique and linkable.

Sometime soon, I intend to read a bit more about semantic Drupal and wrap my brain around Drupal 7 and RDF.

At the moment, I’m curious whether it’s easier to import RDF rather than nodes between various Drupal installations. If you have any insight on the easiest way to import and export content from one Drupal installation to another, let us know!

Posted in design

Adding Google Analytics to our SFX menu

For a lark, we added some Google Analytics code to our SFX menu, to see if we could make use of Google’s visualization tools for use-traffic. The good news is that it worked! Each request for an SFX menu can now be easily captured and graphed over time.

The bad news is that without a unique “page title” for each resource, the information can’t be easily parsed out. We can see each individual SFX menu from the Google Analytics page, but we can’t chart the info or group the links into meaningful collections.

I don’t think we can swap out the SFX menu’s title with a variable that gives us more of an idea of the target or source of the link. Regardless, we’re going to keep collecting the data, and perhaps in time we will export it all into Google Refine to see if we can clean it up and make it really useful.
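One way the cleanup could go: SFX menu requests carry OpenURL parameters in their query strings, so even without unique page titles, the captured URLs can be grouped by those parameters. A hypothetical Python sketch (the URLs and hostname are invented; `sid` is the OpenURL 0.1 source-identifier parameter, which is an assumption about what our captured requests contain):

```python
from urllib.parse import urlparse, parse_qs
from collections import Counter

# Invented stand-ins for captured SFX menu request URLs; the real ones
# would come out of a Google Analytics export.
requests = [
    "http://sfx.example.org/sfx?sid=google&rft.jtitle=Nature",
    "http://sfx.example.org/sfx?sid=Elsevier:Scopus&rft.jtitle=Cell",
    "http://sfx.example.org/sfx?sid=google&rft.jtitle=Science",
]

# Count menu requests per referring source by parsing each query string.
by_source = Counter(
    parse_qs(urlparse(url).query).get("sid", ["unknown"])[0]
    for url in requests
)
print(by_source)
```

Something along these lines might let us group the links into meaningful collections after the fact, even though the page titles themselves stay generic.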

Posted in process

Discoveries about Discovery – moving panes, feeding solr, and going public about google

A couple of reactions have come forward on jamun, our “search the web site and everything else” project. For the most part, the comments have been quite positive, but some significant modifications have also been suggested.

jamun - sample search

The “pane” approach generally seems to work well, but jamun may overdo it. Although we adjust for the number of results in a pane, we don’t attempt to calculate font metrics or other measures that would keep the panes proportional to each other. This often leads to the kind of displays shown at the left. Even with large monitors, the reality is that the columns stretch in a way that sometimes clobbers the panes at the bottom.

There is an interesting discussion on the NGC Mailing List on the next generation of discovery tools and David Walker made a plug for the NCSU model that we have adopted. Steven Morris offered a rationale that hit a lot of bases for what we want to accomplish:

“Although the search tool can often provide immediate gratification by way of the abbreviated result sets for each silo, another objective is stealth instruction:  using taste results to help the user navigate our silos and decide which to dive into for further searching, to then take advantage of whatever advantages (functionality, etc.) the silo-specific discovery environment has to offer.”

Our next steps on the layout are to change the sizing when panes are closed and revamp the columns by content type. We will stay with a three column design for now, but each column will attempt to reflect the type of material it brings forward.

jamun layout

This means giving article searching the coveted left-hand column with the most pixels and paging options, and putting a mish-mash of content into the middle. With solr indexes, we can combine some of the results so that WinSpace (our ETD repository) and SWODA (our historical collection) can share a pane. We anticipate that WinSpace will become a more full-fledged repository in Islandora and that we can try to blend in research data, so that the “other” column might be better characterized as “unpublished” content. A pane for the web site itself would be on the top, and the idea would be for each column to have no more than two panes if possible.

That leaves conifer (our catalogue), Scholars Portal E-Books, and the option to “search inside the book” via Google Books in the right-hand column. Dan Scott came up with a brilliant strategy for keeping a table in evergreen up to date with the necessary field content. The field representations to be used in a solr index are then in one easy-to-use place with negligible overhead. In turn, solr’s data import handler can take care of keeping the index current.

This combination seems like a great option for leveraging the agility of evergreen and the indexing power of solr: for example, adjusting relevancy based on date and possibly blending in full-text content. Mixing full-text content with metadata in particular requires a lot of configuration options, and solr seems to be the best option out there for uniting the two types of materials. (BTW, solr already has a replication feature, and the folks at Scholars Portal are already maintaining a solr index; maybe this combo could be achieved without even requiring much local indexing?)
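To make the date-relevancy idea concrete, here is a hypothetical sketch of the kind of solr request we have in mind: a dismax query that weights metadata above full text and adds solr’s standard reciprocal recency boost. The host, core name, and field names (`title`, `fulltext`, `pub_date`) are placeholders, not our actual configuration.

```python
from urllib.parse import urlencode

# Build a solr select URL with a dismax query, field weighting, and a
# recency boost (recip() over document age, the standard solr recipe).
params = {
    "q": "climate change",
    "defType": "dismax",
    "qf": "title^2 fulltext",                       # metadata weighted above full text
    "bf": "recip(ms(NOW,pub_date),3.16e-11,1,1)",   # newer documents score higher
    "wt": "json",
}
query_url = "http://localhost:8983/solr/catalogue/select?" + urlencode(params)
print(query_url)
```

The boost function is multiplied into the relevancy score, so an old but highly relevant record can still beat a new but marginal one; tuning that balance is exactly the kind of configuration work solr makes possible.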

Using a bookshelf in Google Books for searching inside the book may be the most peculiar part of the puzzle, and we have gone to the source itself to ask about hidden limitations. This Google forum question hasn’t received much attention yet, but the potential benefit could be substantial if allowed. The code for building a large shelf is also on GitHub; jamun itself is soon to follow.

Finally, we posted before about using a Google CSE for article content, but one interesting development in this space comes from Microsoft, which seems to be very close to releasing the details about its API. Add in the already formidable API from Scholars Portal, and we may soon need a discovery layer just to navigate among the discovery options.

Posted in design

The Pareto principle and the subject page

Today I made a presentation to the Leddy Library’s Information Services Department, summarizing some statistics that we had gathered about the Leddy Library website over the last year or so.

This presentation’s focus was quite narrow: How many links make up 80% of the use from the Leddy Library’s Subject Resources pages? And the answer is below:

80% of the use comes from…
• Psychology: 3 links (top link 64%)
• Nursing: 7 links (top link 43%)
• Sociology: 5 links (top link 58%)
• Business: 32 links (top link 30%)
• Political Science: 5 links (top link 51%)
• Engineering: 10 links (top link 34%)
• Biology: 10 links (top link 26%)
• History: 10 links (top link 27%)
• English: 21 links (top link 16%)
• Social Work: 3 links (top link 49%)
• Human Kinetics: 5 links (top link 53%)
• Chemistry: 10 links (top link 25%)
• Classics: 5 links (top link 25%)
• Comm Studies: 6 links (top link 35%)
• Comp Science: 5 links (top link 27%)
• Dramatic Art: 10 links (top link 15%)
• Earth Science: 5 links (top link 26%)
• Economics: 6 links (top link 31%)
• Education: 4 links (top link 51%)
• French: 5 links (top link 35%)
• Labour Studies: 2 links (top link 77%)
• Mathematics: 4 links (top link 49%)
• Music: 6 links (top link 37%)
• Philosophy: 4 links (top link 46%)
• Physics: 12 links (top link 15%)
• Visual Arts: 9 links (top link 38%)

So for the next generation of our library’s website, it is possible that we will present the 10 most-used indexes and have the others available under a “more” link.
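The per-subject calculation is simple enough to sketch: sort the links by clicks and count how many you need before the running total crosses 80% of all use. A Python sketch with made-up click counts (the database names and numbers below are invented for illustration, not our actual statistics):

```python
# Invented click counts for one subject page.
clicks = {"PsycINFO": 640, "PsycARTICLES": 110, "Scholars Portal": 90,
          "Web of Science": 60, "Sociological Abstracts": 50,
          "Everything else": 50}

def links_for_share(clicks, share=0.8):
    """Count the fewest links whose combined clicks reach the given share of use."""
    total = sum(clicks.values())
    running, n = 0, 0
    for count in sorted(clicks.values(), reverse=True):
        running += count
        n += 1
        if running >= share * total:
            return n
    return n

print(links_for_share(clicks))  # → 3
```

Run across every subject page, this is what produces figures like “Psychology: 3 links” above, and it is also the calculation that would pick the 10 links to show before the “more” link.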

BTW, I have summarized some of the broader trends gleaned from our library’s website in this previous post and presentation.

Posted in design, process

The Library Discovery-Layer Product Selection Committee

We are amused.

(via Metadata)

Posted in process

Screencast : an Introduction to the Link Module

Another <5 minute screencast : Introduction_to_Link_Module

This was the answer to last week’s design question: How can we create a list of links from a list of nodes?


Posted in modules

The next design problem: how to create lists of links from a list of nodes

At the moment, the Leddy Library web team is tackling “design problems” to get a better feel for what sort of architecture we need to build. We are concentrating on how to replace our current A to Z list of indexes and databases (currently handled with Zope).

Ideally, each index would be a node, like a more complex version of the image below:

We could add many more fields to this description (such as coverage dates, title lists, open access designation, licensing info, logo) and show different views of this information using Drupal’s Views.

In the image above, the important field that still needs to be added is the “link” to the resource (e.g. http://ezproxy.uwindsor.ca/login?url=http://search.ebscohost.com/login.aspx?authtype=ip,url,uid&profile=ehost&defaultdb=a9h)

Putting aside the work that will go into separating the EZproxy prefix from the rest of the resource’s link, the problem we’re working on is how to create a Drupal View of links that come from a list of such nodes.
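The prefix-separation piece, at least, is mechanical. A minimal Python sketch of the idea (the prefix is our real EZproxy login URL from the example above; the function name and the idea of storing the prefix in one settings variable rather than in every node are my own assumptions):

```python
# Our EZproxy prefix, kept in one place instead of baked into every node.
PREFIX = "http://ezproxy.uwindsor.ca/login?url="

def split_proxy(link):
    """Return (prefix, target) if the link is proxied, else (None, link)."""
    if link.startswith(PREFIX):
        return PREFIX, link[len(PREFIX):]
    return None, link

prefix, target = split_proxy(
    "http://ezproxy.uwindsor.ca/login?url=http://search.ebscohost.com/login.aspx"
)
print(target)  # → http://search.ebscohost.com/login.aspx
```

Stored that way, a View could display the bare target URL or re-attach the prefix on output, which is roughly what we were considering the Token module for.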

I’m not sure whether this is best handled within Views or by installing a module such as Link.

There’s some plumbing in the Drupal API about forming external links, and there’s a reference to the Drupal API now incorporating some of the elements from the Token module (which we were considering for our EZproxy link prefix), but to be perfectly honest, I don’t know how the API is supposed to be used from within Drupal.


Posted in design, process