A jQuery Library for SharePoint Web Services (WSS 3.0 and MOSS): Real World Example – Part 4


Cross-posted from EndUserSharePoint.com

Part 1 Part 2 Part 3 Part 4

Way, way back in January, I wrote Part 3 of this series and said: 

“In my next article, I’ll show what the multiple document metadata entry page (EditFormBulk.aspx) looks like and how it works. Teaser: Yes, it uses jQuery and my jQuery Library for SharePoint Web Services to get its jobs done quite extensively, specifically the Lists Web Service’s operations GetListItems, UpdateListItems, and CheckInFile.” 

Then I just never got around to writing the article, though I’ve gotten regular questions about it since then. Chris Quick pinged me on it a little while back, and I figured it was finally time to bite the bullet and follow through with Part 4. So here it is! 

In Part 3, I showed how I passed a ridiculously long and complicated URL into Upload.aspx which allowed me to control the downstream behavior of both Upload.aspx and EditFormBulk.aspx. That URL looked something like this: 


I built that URL up in the DVWP by adding a bunch of values for the project for which we wanted to upload documents: 

<img alt="Upload" src="/sites/CSO/KR/PublishingImages/Upload.png"/><a href="/sites/CSO/KR/_layouts/Upload.aspx?List=%7B41BBE5C5%2D5321%2D4D07%2D8EFC%2D10B064F85E6E%7D&amp;RootFolder=%2Fsites%2FCSO%2FKR%2FSDLC%20Artifact%20Repository%202010%2F{translate($RequestID, ':', '-')}&amp;MultipleUpload=1&amp;Source=http://{$SERVER_NAME}{ddwrt:UrlDirName(string($PATH_INFO))}/SDLC%20Artifact%20Repository%202010/Forms/EditFormBulk.aspx?RootFolder=%2Fsites%2FCSO%2FKR%2FSDLC%20Artifact%20Repository%202010%2F{translate($RequestID, ':', '-')}%26ProjectInfo={$ProjectID}|{$RequestID}|{@Artifact_x0020_Name}|{$URL}">Upload</a>

I explained the details of how that URL is constructed in Part 3, so if you’d like to decipher it all, go back and check out that article. 
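One small detail worth calling out from that URL: the DVWP's translate($RequestID, ':', '-') normalizes the Request ID into a legal folder name, since colons aren't allowed in SharePoint folder names, and the script later in this article does the same thing with .replace(). As a minimal sketch (the function name is mine, not anything on the actual page):

```javascript
// Colons aren't legal in SharePoint folder names, so the Request ID
// (e.g., "REQ:2010:17") is normalized before it's used as a folder name.
// Hypothetical helper name; the page does this inline with .replace().
function requestIdToFolderName(requestId) {
  return requestId.replace(/:/gi, "-");
}

// requestIdToFolderName("REQ:2010:17") → "REQ-2010-17"
```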

So that big, messy URL sends us to the Upload.aspx page, in the upload multiple documents mode. As I mentioned in Part 3, on some level it makes little sense to me that we have two separate page behaviors depending on whether we want to upload one document or more than one. So here, by passing MultipleUpload=1 to Upload.aspx, we *always* get the multiple document selection page, which gives the user some consistency. And of course, it also lets me treat the Upload.aspx page the same for all cases, which makes my work a little bit easier. So IMHO, it’s a win-win situation. 

In case you don’t have a view of it handy, here’s what a multiple document upload page looks like. I’m not changing anything in this page, just “asking” it to pass along some values for me so that EditFormBulk.aspx can do some smart things. 


Once we select the documents we want to upload, the Source parameter on the Query String causes us to be redirected to the EditFormBulk.aspx page: 


Note that I’ve got some snazzy code in there so that this redirect will work whether we are on the test or production servers, by using these IIS Server Variables: 

SERVER_NAME — The server’s host name, DNS alias, or IP address as it would appear in self-referencing URLs. 

PATH_INFO — Path information, as given by the client, for example, “/vdir/myisapi.dll/zip”. If this information comes from a URL, it is decoded by the server before it is passed to the CGI script or ISAPI filter. 
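To make the downstream logic a little more concrete: the URL from Part 3 passes a pipe-delimited ProjectInfo parameter (ProjectID|RequestID|ArtifactName|URL) on the Query String. Here's a hedged sketch of how a page like EditFormBulk.aspx might unpack it; the function name is mine, and the page's actual code differs:

```javascript
// Illustrative only: unpack the pipe-delimited ProjectInfo Query String
// parameter (ProjectID|RequestID|ArtifactName|URL) built up by the DVWP.
function parseProjectInfo(search) {
  var match = /[?&]ProjectInfo=([^&]*)/.exec(search);
  if (match === null) return null;
  var parts = decodeURIComponent(match[1]).split("|");
  return {
    projectId: parts[0],
    requestId: parts[1],
    artifactName: parts[2],
    url: parts[3]
  };
}
```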

EditFormBulk.aspx looks something like this. Sorry about the low res screenshots, but they are all that I have! 


Section A is simply a Content Editor Web Part (CEWP) with some text explaining how to use the page. Since this form isn’t what users are used to seeing in other Document Library upload situations, we wanted to give them a little help on how to use it. (If I were doing this now, I’d probably make this collapsible so that it wasn’t in the way for users who knew the drill already.) 

Section B is the “money section”. This is where we really take advantage of all of the groundwork we’ve laid up to this point, allowing the user to work with the files they have just uploaded in a richer way than SharePoint provides out of the box. 

Each of the documents we’ve just uploaded is displayed using a Data View Web Part (DVWP). This is a multiple item form, which SharePoint Designer allows you to create simply with the configuration options. Then I took it from there with some script and styling. Let’s break this section down a little more. 

There are five columns on the page: 


I’m going to explain them in reverse order. It’ll probably make sense why as you read through. 

The Applications column is a multi-select Lookup column. I’m using SPServices and the GetListItems operation from the Lists Web Service to look up what values should be set based on the current project, which is passed in the Query String. 
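GetListItems returns multi-select Lookup values in SharePoint's ID;#Value;#ID;#Value format, so the script has to split those apart before it can select the right options in the control. A minimal illustrative sketch (the helper name is mine, not part of SPServices):

```javascript
// Illustrative only: split a SharePoint multi-select Lookup value string,
// e.g. "1;#Payroll;#2;#Billing", into {id, title} pairs.
function parseMultiLookup(value) {
  if (!value) return [];
  var parts = value.split(";#");
  var result = [];
  for (var i = 0; i < parts.length; i += 2) {
    result.push({ id: parts[i], title: parts[i + 1] });
  }
  return result;
}
```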


Then I have some more script to shorten the multi-select boxes as much as possible based on the number of values required. Though I’m setting the values up front, the user can override them here, so the column is editable. 

The Audit Required column is simply a Choice column with three values. I use the SPArrangeChoices function from SPServices to switch them from the default vertical orientation to the horizontal orientation to help with screen real estate. (For those of you keeping score at home, the SPArrangeChoices function didn’t exist at the time; this is where I first came up with the idea and the code.) 

The Artifact Type column is a Lookup column which I also set based on the parameter value on the Query String: 


The Name column is simply the name of the uploaded document.

Column 1 is the “Actions” column. This column is entirely of my own making, and gives the user a link for each document to accomplish what they need to do on the page. 

In that column, I’ve got this markup in the XSL: 

<td class="action" onmouseover="this.className = 'actionhover';" onmouseout="this.className = 'action';">
  <a id="checkInLink{$Pos}" href="#">
    <xsl:attribute name="sourceId">
      <xsl:value-of select="@ID"/>
    </xsl:attribute>
    <xsl:attribute name="pageUrl">
      <xsl:value-of select="concat('http://', $SERVER_NAME, @FileRef)"/>
    </xsl:attribute>
    <xsl:attribute name="fileRef">
      <xsl:value-of select="@FileRef"/>
    </xsl:attribute>
    <xsl:attribute name="onclick">
      saveDocument(this, &apos;<xsl:value-of select="concat('http://', $SERVER_NAME, @FileRef)"/>&apos;);
      checkInDocument(this, &apos;<xsl:value-of select="concat('http://', $SERVER_NAME, @FileRef)"/>&apos;);
    </xsl:attribute>
    Save and Check In</a>
</td>

All of that adds up to the Save and Check In link. The magic is in the attributes and the script which is called when the user clicks the link. (I’ll gloss over the formatting. Suffice it to say that there’s some CSS to make the link look more “button-like” just to spiff it up.) 

When the user clicks this button for an individual document, the saveDocument and checkInDocument functions are called in sequence. (We have to save the document in case there have been any changes before we can check it in.) Both of these functions use the SharePoint Web Services to do their thing. 


There’s a bunch of setup code before this, which basically pulls the values from the DOM and assigns them to variables for this call to UpdateListItems:

$().SPServices({
  operation: "UpdateListItems",
  async: false,
  listName: "SDLC Artifact Repository 2010",
  updates: "<Batch OnError='Continue' RootFolder='/sites/CSO/kr/SDLC Artifact Repository 2010/" + requestId.replace(/:/gi, "-") + "/'>" +
      "<Method ID='1' Cmd='Update'>" +
        "<Field Name='ID'>" + $(obj).attr("sourceId") + "</Field>" +
        "<Field Name='FileRef'>" + $(obj).attr("fileRef") + "</Field>" +
        "<Field Name='Title'>" + requestId + "</Field>" +
        "<Field Name='ArtifactType'>" + artifactTypeSet + "</Field>" +
        "<Field Name='AuditRequired'>" + auditRequired + "</Field>" +
        "<Field Name='Application_x0028_s_x0029_'>" + applications + "</Field>" +
      "</Method>" +
    "</Batch>",
  completefunc: function(xData, Status) {
    // Handle any errors returned in the response here
  }
});
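If you find yourself building similar Batch CAML in several places, it can be handy to factor the string concatenation into a helper. This is just an illustrative sketch of the Batch format UpdateListItems expects, not the page's actual code:

```javascript
// Illustrative only: build the Batch CAML string that the Lists Web
// Service's UpdateListItems operation expects, from a map of field values.
function buildUpdateBatch(rootFolder, fields) {
  var xml = "<Batch OnError='Continue' RootFolder='" + rootFolder + "'>" +
    "<Method ID='1' Cmd='Update'>";
  for (var name in fields) {
    xml += "<Field Name='" + name + "'>" + fields[name] + "</Field>";
  }
  xml += "</Method></Batch>";
  return xml;
}
```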

This function attempts to check the document into the Document Library using the CheckInFile operation from the Lists Web Service.

// Check in a single document, disable all of the column controls, and give a visual cue that it is checked in
function checkInDocument(obj, pageUrl) {
  var success = true;
  $().SPServices({
    operation: "CheckInFile",
    async: false,
    pageUrl: pageUrl,
    comment: "Checked in during bulk upload",
    CheckinType: 1,
    completefunc: function (xData, Status) {
      $(xData.responseXML).find("errorstring").each(function() {
        alert($(this).text() + " Please save all of your changes before attempting to check in the document.");
        success = false;
      });
    }
  });
  // If we couldn't check the document in, then don't disable the item's row
  if(!success) return success;
  // Disable the item and show it is checked in
  $(obj).closest("tr").each(function() {
    // Mark the item's row so that the user can see it is checked in
    $(this).attr("style", "background-color:#bee1aa");
    // Replace the Check In link with a "done" cell
    $(this).prepend("<td class='actiondone'></td>");
    // Disable the Name column
    $(this).find("input[Title='Name']").attr("disabled", "disabled");
    // Disable the RequestID column
    $(this).find("input[Title='RequestID']").attr("disabled", "disabled");
    // Disable the Artifact Type column
    $(this).find("input[Title='ArtifactType']").attr("disabled", "disabled");
    // Disable the AuditRequired column
    $(this).find("[id^='AuditRequired'] input").attr("disabled", "disabled");
  });
  return success;
}
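The CheckinType: 1 above means a major check in. Per the WSS 3.0 SDK, CheckInFile accepts three values, which you might capture in a small constant map rather than a magic number (this map is my own shorthand, not part of SPServices):

```javascript
// The checkinType values the Lists Web Service's CheckInFile operation
// accepts, per the WSS 3.0 SDK: 0 = minor (draft), 1 = major (publish),
// 2 = overwrite. The constant name is mine, not part of SPServices.
var CHECKIN_TYPE = {
  MinorCheckIn: 0,
  MajorCheckIn: 1,
  OverwriteCheckIn: 2
};
```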

As with any code you look at which you wrote a long time ago, I can see some things I’d change in this, but the net effect is perfectly fine. What happens is that the item is checked in, giving warnings if we don’t succeed, the row is colored green to indicate success, and the Save and Check In link is replaced with a nice little checkmark. 


In the screenshot above, you can also see what the hover effect looks like on the Save and Check In links. By applying just a little bit of CSS here, the links feel more like buttons. 

Finally, you’ll notice that there are two buttons at the bottom right of the page. 

Check All In

This button lets the user check all of the documents in at once. What it does is find all of the Save and Check In links in the first column and run the same save and check in logic for each of them.

// Check All In
function checkInDocuments() {
  var success = true;
  $("[id^='checkInLink']").each(function() {
    success = saveDocument($(this), $(this).attr("pageUrl"));
    success = checkInDocument($(this), $(this).attr("pageUrl"));
    // Returning false stops the iteration if anything went wrong
    if(success == false) return false;
  });
  if(success) alert("All artifacts have been checked in.");
}

The Finished button lets the user say “I’m done here.” If they haven’t finished working with any of the documents, they will show on this page again the next time they get here. (There’s also a link on their management pages which lets them get back here directly, without uploading any new documents.) 


Well, better late than never, as I’m too fond of saying. In this series, you’ve seen how you can use some DVWPs, jQuery (and simple JavaScript), and the SharePoint Web Services to build a pretty slick and complex application. 

Since I built this application about a year ago, I’ve been able to go much further with these techniques. Lately, I’ve been using jQueryUI to add even more spiffiness to the user experience for stuff like this. If you want to use SharePoint to host real applications, develop them in the Middle Tier, and give them some real pizzazz, it’s something you might want to look into. 

And as for deployment (a perennial question), keep in mind that I did all of this with SharePoint Designer. Nothing whatsoever was deployed to the server. The deployment methodology to go from test to production is admittedly copy and paste, but if you know what you are doing, that works absolutely fine. The worst you can do is break one of the pages you’re deploying, which is easily fixed. 

These techniques also work really well in a cloud-based situation where you *can’t* deploy “real” code to the server. 

I hope you’ve enjoyed the series, and sorry for the big lag times!


