jQuery Library for SharePoint Web Services (SPServices) 2014.02 Released

Just in time for the holiday gift-giving season, I’m releasing SPServices 2014.02. This is the second release in 2014 (as you can glean from the release name). If you’re using an earlier version of SPServices, I strongly recommend an upgrade. Read on.

The most important change in this release fixes an egregious error on my part that goes all the way back to 2013.01. Paul Tavares (@paul_tavares) spotted it quite a while back, and I was just too dumb to realize how important it was to fix. The net effect of my mistake was that SPServices was always caching requests, regardless of how you set cacheXML. On pages where we simply call SPServices a few times on page load, it usually wouldn’t make much of a difference beyond a little extra memory used by the browser. However, in things like Single Page Applications (SPAs) which use SPServices for the data transport layer (see my blog post series: Single-Page Applications (SPAs) in SharePoint Using SPServices), it could make a *huge* difference. It effectively meant that the data for every unique request was cached whether it should be or not. Over a long page lifespan, that could add up to a lot of memory eaten up. Bad news. All better now.
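
For context, here’s a minimal sketch of a core SPServices call that uses cacheXML (the list name is a placeholder). As of 2014.02 the setting is honored again, so responses are cached only when you ask for it:

$().SPServices({
  operation: "GetListItems",
  listName: "Countries",
  cacheXML: false, // now honored: no caching for this request
  completefunc: function (xData, Status) {
    // Process xData.responseXML here; with cacheXML: true, repeated
    // identical requests would be served from the cache instead
  }
});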

I found out over the last few months that the insidious “ Required Field” issue I thought I had fixed completely in 2014.01 wasn’t totally fixed. For folks on SharePoint 2010 who applied last December’s CU, the markup in the DOM was a little different from the fix I had put into 2014.01. (See: Office 365 Update Changes ‘Display Name’ on Required Fields) This *should* be fixed for everyone now. Please let me know if you see any issues going forward. And Microsoft, please don’t do that again.

There are a handful of new operations in this release, added due to user requests (a sample call follows the list):

  • WebPartPages.DeleteWebPart
  • WebPartPages.SaveWebPart2
  • RecordsRepository.GetRecordRouting
  • RecordsRepository.GetRecordRoutingCollection
  • RecordsRepository.GetServerInfo
  • RecordsRepository.SubmitFile
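
If you’re curious what calling one of the new operations looks like, here’s a hedged sketch through the SPServices core, using the options listed in the New Operations table below; every value is a placeholder:

$().SPServices({
  operation: "SaveWebPart2",
  pageUrl: "/Pages/Default.aspx", // placeholder page URL
  storageKey: "00000000-0000-0000-0000-000000000000", // placeholder web part storage key
  webPartXml: "<WebPart xmlns='http://schemas.microsoft.com/WebPart/v2'></WebPart>", // placeholder web part definition
  storage: "Shared",
  allowTypeChange: false,
  completefunc: function (xData, Status) {
    // Inspect xData.responseXML for success or a SOAP fault
  }
});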

Finally, I recently moved my SPServices development repository over to Github at sympmarc/SPServices. I’m still getting my feet wet with Github (See: SPServices and Github – This Time I Mean It), but I plan to have my most current work available there going forward. I’ll still be posting the releases over on the Codeplex site, and I’ll monitor the issues there as well. I’m hoping that by using Github, we’ll have more people contributing to SPServices. It’s always been a community-driven project, but I’d love to get more direct contributions.

This release will be, as all of the recent releases have been, available on cdnjs as soon as possible. If you’d like to serve up SPServices from a CDN, cdnjs is the place to get it. cdnjs hosts a lot of JavaScript libraries that don’t have enough coverage to interest the big boy CDNs, so check it out in any case.

Finally, I want to thank Paul Tavares again for his help with SPServices over the years and with this release in particular. Both he and Josh McCarty (@joshmcrty) have also been trying to get me over onto GitHub for a long time, but my thick skull just hasn’t gotten it before. I’m using JetBrains WebStorm as my IDE these days, and the GitHub integration there has finally made GitHub make sense to me. As we say here in Massachusetts, “light dawns on Marblehead”.

As usual, there are also a number of additional changes to fix existing bugs or improve efficiency (yes, I’m still able to improve on my old code, and I expect that will continue).

You can see the full list of enhancements and improvements on the download page. Note the link to the Issue Tracker items for this release. For posterity, here are links to the release notes showing all the fixes and improvements from the Issue Tracker on Codeplex.

New Functionality

| Issue Tracker Item | Function | Description |
| --- | --- | --- |

New Operations

| Web Service | Operation | Options | MSDN Documentation | Issue Tracker Item |
| --- | --- | --- | --- | --- |
| WebPartPages | DeleteWebPart | pageUrl, storageKey, storage | WebPartPagesWebService.DeleteWebPart Method | 10273 |
| WebPartPages | SaveWebPart2 | pageUrl, storageKey, webPartXml, storage, allowTypeChange | WebPartPagesWebService.SaveWebPart2 Method | 10273 |
| RecordsRepository | GetRecordRouting (ALPHA6) | recordRouting | RecordsRepository.GetRecordRouting Method | 10257 |
| RecordsRepository | GetRecordRoutingCollection | NA | RecordsRepository.GetRecordRoutingCollection Method | 10257 |
| RecordsRepository | GetServerInfo | NA | RecordsRepository.GetServerInfo Method | 10257 |
| RecordsRepository | SubmitFile | fileToSubmit, properties, recordRouting, sourceUrl, userName | RecordsRepository.SubmitFile Method | 10257 |


Bug Fixes and Efficiency

| Issue Tracker Item | Function | Description |
| --- | --- | --- |
| 10267 | $().SPServices.SPComplexToSimpleDropdown | Possible bug in SPComplexToSimpleDropdown |
| 10253 | $().SPServices.SPFindMMSPicker | SPFindMMSPicker missing from 2014.01 |
| 10279 | $().SPServices.SPGetListItemsJson | SPGetListItemsJson doesn’t handle attachments correctly |
| 10284 | $().SPServices.SPFilterDropdown | SPFilterDropdown not filtering dropdown column |
| 10272 | $().SPServices.SPGetCurrentSite | $().SPServices.SPGetCurrentSite documentation |
| 10277 (ALPHA5) | $().SPServices.SPGetCurrentUser | Fix for SPGetCurrentUser at webUrl / |
| 10248 | $().SPServices.SPGetCurrentUser | Trouble with SPServices and SharePoint form |
| 10265 | $().SPServices.SPGetCurrentUser | $().SPServices.SPGetCurrentUser isn’t returning ID |
| 10254 | $().SPServices.SPDisplayRelatedInfo | SPDisplayRelatedInfo – DIV named improperly with Required field |
| 10256 | $().SPServices.SPCascadeDropdowns | SPCascadeDropdowns Required Lookup issue |
| 10262 | $().SPServices.SPComplexToSimpleDropdown | SPComplexToSimpleDropdown – required fields get renamed without the ‘Required Field’ appended to the field name |
| 10146 | $().SPServices | ResolvePrincipals with addToUserInfoList=true requires SOAPAction |
| 10271 | $().SPServices.SPGetListItemsJson | SPGetListItemsJson not using defaults.webURL |
| 10283 | $().SPServices.SPGetListItemsJson | CAMLViewName not working |
| 10182 (ALPHA7) | $().SPServices | async on version 2013.01 and caching |
| 10298 | $().SPServices | GetTermSets argument names are slightly wrong |
| 10299 | $().SPServices | WebUrl not working |

SPServices and Github – This Time I Mean It

I’ve had some false starts moving SPServices to Git and/or GitHub over the last few years. If it weren’t for Josh McCarty’s (@joshmcrty) help on every release, I wouldn’t even have gotten SPServices onto cdnjs, since they use GitHub. (Yes, SPServices is available via CDN at cdnjs and has been for several years now.) I’m just tremendously behind the times.

So it’s only taken me about two years, but I’m really biting the bullet on GitHub this time. I’ve just read back through a bunch of great suggestions I got when I abortively tried to move things to GitHub long ago, and I’m curious, given the amount of time that has passed, what those folks might do differently now.

Here are my assumptions/preferences:

  • Simple, simple, simple
  • I’m using a public folder in my Dropbox as my “CDN” for development. I think Paul Tavares (@paul_tavares) knows where it is, but no one else does. This Dropbox-based CDN helps immensely for testing, since I can just point my script references there in all of my test environments. In case anyone is wondering, I’d love to use OneDrive (either flavor) for this, but it just doesn’t work because of the way it handles redirects. I can get a clean URL from Dropbox that just plain works.
  • I’m leaning toward WebStorm for my IDE these days. (Where I can’t install it in client environments, I’ll still use SharePoint Designer and/or Sublime Text.) WebStorm has very robust integration with GitHub that even seems to make sense to me. I’ve got my WebStorm project embedded in the Dropbox CDN I mentioned above.
  • In case you’re wondering, I do probably 99% of the work on SPServices, so my ideas for version control have been extremely simple to date. SPServices is a one-file project and I make virtually all the changes to it. SPServices wouldn’t be what it is without excellent help and contributions from people like Josh McCarty and Paul Tavares; I couldn’t have gotten to this point without them. But as far as the actual edits and testing, it’s mostly me.
  • I’ll continue hosting the docs and downloads on Codeplex, at least for the foreseeable future. This makes sense because of the volume of documentation and the great discussions history that’s already there.

What I’m looking for is best (better) practices, build ideas, etc. SPServices will continue to live as long as people find it useful, and I want to keep building it and supporting it. That said, it’s my “side project” – something I do for fun and learning opportunities. So any ideas should be labor *saving*, not labor *producing*.

It’s my hope that – as they mentioned in the suggestions I linked to above – more people may decide to contribute with the move to Github. Who knows, maybe we can haul it into RESTland along with SharePoint 2013.

Today I posted the latest beta for the 2014.02 release. I expect to make it a stable release in about a week or so, since quite a few people have been downloading it and testing it as I’ve made changes over the last few months. I’ll write more about why you *really* should upgrade to this new version in an upcoming post. (Thanks yet again to Paul Tavares on this one.)

Thanks in advance for any ideas you can toss into the mix. Feel free to reply in the comments here or on the older thread in the Codeplex Discussions.


Visual Studio Intellisense for SPServices

Be careful what you ask for!

Well, no more than two hours later Daniel had sent me the stub of a vsdoc file for SPServices.

Now I don’t use Visual Studio, but I suppose if you like this kind of hand-holding, Intellisense could be useful. If you’re game, give it a try and let me know how it works out.

jquery.SPServices-2014.02 Intellisense BETA
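
If you’re trying it out in Visual Studio, the usual convention is that a -vsdoc.js file sitting next to the referenced script gets picked up automatically once you add a reference comment; the path below is just an assumption for illustration:

/// <reference path="/SiteAssets/js/jquery.SPServices-2014.02.js" />
// With jquery.SPServices-2014.02-vsdoc.js alongside the file referenced
// above, Intellisense can offer completions for calls like this:
$().SPServices.SPGetCurrentUser({
  fieldName: "Title"
});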

If you have suggestions, additions, whatever, please send them along. If I get a general thumbs up, I’ll include the vsdoc file with the 2014.02 release, which I’m hoping to get out there in the next few weeks.


SPServices in the Office 365 Developer Podcast

Today Randy Drisgill (@drisgill) alerted me to the fact that Jeremy Thake (@jthake) and Steve Walker (@sharepointing) were talking about me in the Office 365 Developer Podcast: Episode 018 with Steve Walker on SharePoint UX developer guidance. While they do indeed say some very nice things about me (those payments to Redmond are working out), there’s actually a *lot* of great material covered in the podcast. If you just want to hear the part about SPServices, it starts around 28:00.

It’s good to hear that Jeremy, Steve, and I think about some things the same way:

  • I’m awesome (just kidding)
  • Everyone loves SPServices, even Steve
  • “The SOAP services are deprecated, but unlikely to go away any time soon” (Jeremy)
  • “Deprecated doesn’t mean you can’t use it; it means [Microsoft] isn’t going to invest in the future and [they] will take it away at some point.” (Steve)
  • “Deprecated means that ‘no engineer is going to touch that’. It’s still running, it’s still in there, but we’re not going to be adding additional methods or functions.” (Jeremy)
  • Changes to the DOM can break SPServices and Microsoft *will* change the DOM on Office365 – expect it
  • Governance for script-based solutions is just as important as for anything else.
  • This stuff isn’t unique to SharePoint and Microsoft. We’d deal with a lot of these things with anyone else’s Web-based software, too.

As for the deprecation of the SOAP Web Services, as far as I can tell there hasn’t been *any* work done on them since SharePoint 2010 was released. That was way back on May 12, 2010, making it over 4 1/2 years since anyone has messed with them. Effectively, they have been deprecated for that long based on the definitions above. That latter part is one of the reasons the SOAP Web Services are so awesome: since no one messes with them, they are incredibly stable. Also, since they were developed a long time ago, they are pretty darn efficient. (We had to work harder to get more out of less hardware capability back then.)

Just as with InfoPath, deprecated is not a death knell. With InfoPath we *know* that there’s a long, long runway out to 2023 before support stops for it: “the InfoPath 2013 desktop client and InfoPath Forms Services for SharePoint Server 2013 will continue to be supported through 2023 as part of our Lifecycle support policy.” It will have a healthy life for quite a while after that, too, just like Windows XP still does. We don’t know what the runway looks like for the SOAP Web Services, though. That may mean we have more time, but it may mean we have less.

For quite a while now, I’ve advocated offering FaaS – or “Functions as a Service” – to the organization. By adding things like jQuery into the master page and letting citizen developers know about it, you immediately get some accountability and can even start some collaboration around that type of development. I wrote a whole chapter about these ideas last year in the book Black Magic Solutions for White Hat SharePoint. If you’re on premises now but think you may be moving to Office365 in the future, the more you (an IT person) know about the citizen developer work that drives your business now, the better. The work that those citizen developers do is *so* important – they build what the organization truly needs, and they are unsung heroes.

Some things I am not on the same page with Jeremy and Steve on:

“Don’t use our DOM as an API” – This would be fine if there were good ways to alter the DOM via a “real” API. Maybe that’s coming and maybe it isn’t. Until it does, we don’t have much choice if we want to extend (or fix) things in places like the default list forms, which is exactly what so many of the SPServices value-added functions do.

The rules aren’t black and white. Much of the messaging from Microsoft centers around SharePoint Online and Office365, as if on premises installations don’t even exist. If you’re running things on premises, the same things won’t happen to you at such a rapid pace, the pace driven by the regular updates to Office365. You have control, so you can make different decisions. I know that the podcast is “Office365 Developer…”, but many people take messages from Redmond as applying to everything. It’s all nuances.

I’m probably more concerned about all the existing code out there that uses SPServices on top of the SOAP Web Services than about the services themselves. I’ve been promising Scot Hillier (@scothillier) a series of articles on this for ITUnity for months, but I just can’t seem to get it done. Watch for some material from me about it, though. I want to help you move from SOAP to REST in an organized and productive way. You will have to do it sooner or later. Preparing for that eventuality will help to address the concerns that Steve and Jeremy expressed in the podcast.

And thanks again for the kind words and support, guys.

Caching SharePoint Data Locally with SPServices and HTML5’s Web Storage

The SharePoint SOAP Web Services are fast. In fact, I think they are as fast as the newer REST Web Services in many cases. The old, crufty SOAP Web Services even provide batching, something that the REST services don’t yet do. (Andrew Connell (@andrewconnell) has been beating the drum about this with Microsoft for months now, and we all hope they get this OData capability into SharePoint’s REST services sooner rather than later.)
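
To show what that batching looks like, here’s a minimal sketch through the SPServices core, using the Lists Web Service’s UpdateListItems operation; the list and field values are placeholders:

$().SPServices({
  operation: "UpdateListItems",
  listName: "Countries",
  // One Batch element can carry many Method elements: several writes, one request
  updates: "<Batch OnError='Continue'>" +
    "<Method ID='1' Cmd='Update'>" +
      "<Field Name='ID'>1</Field>" +
      "<Field Name='Population'>38000000</Field>" +
    "</Method>" +
    "<Method ID='2' Cmd='Update'>" +
      "<Field Name='ID'>2</Field>" +
      "<Field Name='Population'>128000000</Field>" +
    "</Method>" +
    "</Batch>",
  completefunc: function (xData, Status) {
    // Results for both updates come back in a single response
  }
});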

Even though the SOAP services are fast, sometimes they just aren’t fast enough. In some of those cases, it may make sense to store some of your data in the browser’s Web storage so that it’s there on the client during a session or across sessions. Web storage is an HTML5 capability that is available in virtually all browsers these days, even Internet Explorer 8.

The best candidates for this type of storage (IMO) are list contents that are used as references and that don’t have a high number of changes. As an example, you might decide to store a list of countries in Web storage rather than loading them from the Countries list every time a page loads. Even though the read from the list is fast, it has to take *some* time. There’s the retrieval time, and then there is also any processing time on the client side. For instance, if you have dozens of fields per country and you need to load them into a complex JavaScript structure, that takes time, too. If those data chores are making your page loads seem laggy, then consider using local storage.

There are three main ways you can store data locally to improve performance. I’m not going to go into all of their intricacies, but I will give you some rules of thumb. There are a lot of nuances to this, so before you dive in, do some studying about how it all works.

Cookies

For small pieces of data, you should consider using cookies. Contrary to just about every article out there in the press, cookies are not bad. They can store up to 4k of data each, which you can read back when the user returns to the page. There’s an excellent little jQuery plugin I use to facilitate this called, aptly, jquery-cookie. You can download it (for free!) from GitHub here. Cookies persist across sessions.
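
Here’s a minimal sketch with jquery-cookie, assuming the plugin is loaded after jQuery (the cookie name and value are placeholders):

// Store a small preference for 7 days, available site-wide
$.cookie("preferredCountry", "Canada", { expires: 7, path: "/" });

// Read it back on a later visit; returns undefined if the cookie isn't set
var preferredCountry = $.cookie("preferredCountry");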

Session Storage

Session storage is the flavor of Web storage that lets you store data just for the duration of the session. Think of a session as a browser lifespan: once you close the browser, the session storage is gone. Both session storage and local storage sizes are limited by the browser you are using. If you want to know whether Web storage is available in your browser of choice, take a look at “Can I use”. The amount of storage each browser gives you is a moving target, but it’s allocated per domain.
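
A quick sketch of the sessionStorage API, assuming a countries array has already been loaded from somewhere (the key name is a placeholder):

// Web storage only holds strings, so serialize objects with JSON
sessionStorage.setItem("Countries", JSON.stringify(countries));

// Later in the same browser session; getItem returns null if the key is absent
var cachedCountries = JSON.parse(sessionStorage.getItem("Countries"));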

Local Storage

Local storage takes Web storage one step further. The data stored in local storage persists across browser sessions. In fact, it usually won’t go away until you explicitly delete it. (Consider this fact when you are gobbling up local storage in your development process.)
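
localStorage has exactly the same API; only the persistence differs, which is why explicit cleanup matters (key name again a placeholder):

// Survives browser restarts until you remove it
localStorage.setItem("Countries", JSON.stringify(countries));

// Remove one item when you're done with it, or wipe the whole domain's storage
localStorage.removeItem("Countries");
// localStorage.clear();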

So What?

The trick with using these storage mechanisms is managing the data you’ve put into Web storage as a cache. That data can go stale, either because changes were made to the underlying data source or because the cache itself has become corrupted. The latter is more difficult to deal with, so here I’ll focus on the former.

JavaScript – like most other programming languages – lends itself to building wrapper functions that add additional layers of abstraction on top of underlying functionality. Too many levels of abstraction can make things confusing, but with careful thought and smart code writing, you can build abstractions that serve you well.

In a recent client project, I found that as list data volumes were increasing, the pages in my SPServices- and KnockoutJS-driven application were loading more and more slowly. I’m building on top of SharePoint 2007 in this case, so even if I wanted to use REST, I couldn’t, nor do I believe that it would automatically make anything faster. If we had better servers running things, that might make a huge difference, but we have no control over that in the environment.

What I wanted was a reusable wrapper around SPGetListItemsJson (which is itself a wrapper around the SOAP Lists Web Service’s GetListItemChangesSinceToken operation and SPServices’ SPXmlToJson) that would let me check Web storage for a cached data source (list data), read either the entire data source or just the deltas from the SharePoint list, load the data into my application, and then update the cache appropriately.

The getDataSource function below is what I’ve come up with so far. There’s some setup to use it, so let me explain the parameters it takes:

  • ns – This is the namespace into which you want to load the data. In my applications these days, following the lead from the patterns Andrew and Scot Hillier (@scothillier) have published, I usually have a namespace defined that looks something like ProjectName.SubProjectName.DataSources. The “namespace” is simply a complex JavaScript object that contains most of my data and functions.
  • dataSourceName – The name that I want to give the specific data source within ns. In my example above with the Countries list I would use “Countries”.
  • params – This is the big magilla of the parameters. It contains all of the values that will make my call to SPGetListItemsJson work.
  • cacheItemName – This is the name of the item I want to store in Web storage. In the Countries example, I would use “ProjectName.SubProjectName.DataSources.Countries”.
  • storageType – Either “localStorage” or “sessionStorage”. If I expect the data to change regularly, I’d probably use sessionStorage (this gives me a clean data load for each session). If the data is highly static, I’d likely use localStorage.

And here’s the code:

/* Example:
getDataSource(ProjectName.SubProjectName.DataSources, "Countries", {
  webURL: "/",
  listName: "Countries",
  CAMLViewFields: "<ViewFields>" +
      "<FieldRef Name='ID'/>" +
      "<FieldRef Name='Title'/>" +
      "<FieldRef Name='Population'/>" +
      "<FieldRef Name='CapitalCity'/>" +
      "<FieldRef Name='Continent'/>" +
    "</ViewFields>",
  CAMLQuery: "<Query>" +
      "<OrderBy><FieldRef Name='ID'/></OrderBy>" +
    "</Query>",
  CAMLRowLimit: 0,
  // changeToken is managed internally by getDataSource from the cached value
  mapping: {
    ows_ID: {"mappedName": "ID", "objectType": "Counter"},
    ows_Title: {"mappedName": "Title", "objectType": "Text"},
    ows_Population: {"mappedName": "Population", "objectType": "Integer"},
    ows_CapitalCity: {"mappedName": "CapitalCity", "objectType": "Text"},
    ows_Continent: {"mappedName": "Continent", "objectType": "Lookup"}
  }
}, "ProjectName.SubProjectName.DataSources.Countries", "localStorage")
*/

function getDataSource(ns, dataSourceName, params, cacheItemName, storageType) {

  var dataReady = $.Deferred();

  // Get the data from the cache if it's there
  ns[dataSourceName] = JSON.parse(window[storageType].getItem(cacheItemName)) || new DataSource();
  var oldToken = ns[dataSourceName].changeToken;
  params.changeToken = oldToken;

  // Read whatever we need from the dataSource
  var p = $().SPServices.SPGetListItemsJson(params);

  // Process the response
  p.done(function() {
    var updates = this.data;
    var deletedIds = this.deletedIds;
    var changeToken = this.changeToken;

    // Handle updates/new items
    if (oldToken !== "" && updates.length > 0) {
      for (var i = 0; i < updates.length; i++) {
        var thisIndex = ns[dataSourceName].data.binaryIndexOf(updates[i], "ID");
        // If the item is in the cache, replace it with the new data
        if (thisIndex > -1) {
          ns[dataSourceName].data[thisIndex] = updates[i];
        } else {
          // Otherwise, insert the new item in sorted position; binaryIndexOf
          // returns the one's complement of the insertion point when the item
          // isn't found, so ~thisIndex recovers that index
          ns[dataSourceName].data.splice(~thisIndex, 0, updates[i]);
        }
      }
    } else if (oldToken === "") {
      ns[dataSourceName] = this;
    }
    // Handle deletes, skipping any IDs that aren't in the cache
    for (var i = 0; i < deletedIds.length; i++) {
      var thisIndex = ns[dataSourceName].data.binaryIndexOf({
        ID: deletedIds[i]
      }, "ID");
      if (thisIndex > -1) {
        ns[dataSourceName].data.splice(thisIndex, 1);
      }
    }
    // Save the updated data back to the cache
    if (oldToken === "" || updates.length > 0 || deletedIds.length > 0) {
      // Save the new changeToken
      ns[dataSourceName].changeToken = changeToken;
      window[storageType].setItem(cacheItemName, JSON.stringify(ns[dataSourceName]));
    }
    dataReady.resolve();
  });
  return dataReady.promise();
}

Some of the nice things about this function:

  • It’s generic. I can call it for any list-based data source in SharePoint. (I started out building it for one data source and then generalized it.)
  • I can call it during the page life cycle to refresh the application data anytime I want, or on a schedule, perhaps with setInterval (see the usage sketch below).
  • I can set a lot of parameters to cover a lot of different use cases.
  • Each time I call it, it updates the cache (if it needs to) so that the next time I call it I get a “fresh” copy of the data.
  • It only loads the data that it needs to, by using the GetListItemChangesSinceToken capabilities.
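
Here’s a minimal usage sketch, assuming the params object from the example above lives in a (hypothetical) variable named countriesParams and the namespace object already exists:

function refreshCountries() {
  return getDataSource(
    ProjectName.SubProjectName.DataSources,
    "Countries",
    countriesParams,
    "ProjectName.SubProjectName.DataSources.Countries",
    "localStorage"
  ).done(function() {
    // The data source is now current; rebind the UI here
    // (e.g., update a KnockoutJS observableArray)
  });
}

// Refresh on page load, then every five minutes
refreshCountries();
setInterval(refreshCountries, 5 * 60 * 1000);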

And some downsides:

  • Since I know what data I’m working with in my application and that it will fit into the Web storage easily, I’m not worrying about failed saves.
  • If the cache does become corrupt (not something I expect, but there’s always Murphy), I’m not handling it at all. A possible guard is sketched below.
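
If you want that belt and suspenders, one simple guard is to wrap the cache read in a try/catch and fall back to a fresh DataSource. This is a sketch of what you could swap into getDataSource, not what it does today:

function readCache(storageType, cacheItemName) {
  try {
    // JSON.parse returns null for a missing item, but throws on corrupt JSON
    return JSON.parse(window[storageType].getItem(cacheItemName)) || new DataSource();
  } catch (e) {
    // Corrupt cache: discard it and start over with an empty DataSource
    window[storageType].removeItem(cacheItemName);
    return new DataSource();
  }
}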

If you decide to try this out, you’ll need a few auxiliary functions as well:

/* DataSource constructor */
function DataSource() {
  this.changeToken = "";
  this.mapping = {};
  this.data = [];
  this.deletedIds = [];
}

/** Adapted from http://oli.me.uk/2013/06/08/searching-javascript-arrays-with-a-binary-search/
 *
 * Performs a binary search on the host array.
 * @param {Object} searchObj The object to search for within the array.
 * @param {String} searchElement The property of the object to compare. The objects in the array must be sorted by this property.
 * @return {Number} The index of the element if found. If not found, the one's complement of the insertion point (always negative), so the caller can recover it with ~.
 */
Array.prototype.binaryIndexOf = function(searchObj, searchElement) {

  var minIndex = 0;
  var maxIndex = this.length - 1;
  var currentIndex;
  var currentElement;

  var searchValue = searchObj[searchElement];

  while (minIndex <= maxIndex) {
    currentIndex = (minIndex + maxIndex) / 2 | 0;
    currentElement = this[currentIndex];

    if (currentElement[searchElement] < searchValue) {
      minIndex = currentIndex + 1;
    } else if (currentElement[searchElement] > searchValue) {
      maxIndex = currentIndex - 1;
    } else {
      return currentIndex;
    }
  }

  // Not found: the one's complement of the insertion point (minIndex) is
  // always negative, so it can't be confused with a match at index 0
  return ~minIndex;
};