Auth, Auth, Auth – The Bane of Our Development Existence

I was reading through one of Bob German’s (@Bob1German) recent (and great!) posts about Porting REST calls to SharePoint Framework and thinking about a conversation Bob, Julie (@jfj1997) and I had at our “salon”* when I was reminded of a very specific truth about our working days.

Image from New York City Department of Transportation on Flickr

I would venture to say that some ridiculously high percentage of all of our time developing with SharePoint and Office 365 comes down to figuring out why we can’t get something to work due to authorization or authentication issues. Some say this is a necessary evil to create enterprise software, but I say it’s an unnecessary problem. Unfortunately, documentation about auth stuff tends to be tremendously opaque and difficult to understand. In the same way that I expect the server to be there and I want nothing to do with it directly, I want auth stuff to just work – easily.

Sadly, this has never been the case with so-called “enterprise-class” software. On some level, I think the obtuseness of auth is there on purpose to keep the bar high enough to exclude lower end hackers. Unfortunately, we all get caught up in the kudzu of auth as a corollary effect.

Image from NatalieMaynor on Flickr

Ok, so that’s my editorial on auth. I wish it were easier, but it isn’t.

Recently in one of my Single Page Applications (SPAs) for a client, we kept getting weird failures in posting data to a list. Weird mainly in that I never saw the error on my end, my client is far away, and screen sharing is tricky due to the technical proficiency on that end. I’m not dissing anyone; they are great at what they do, but debugging JavaScript with me is not in their wheelhouse.

If you write software, you know that the worst bugs to squash are those that happen sporadically, and only on someone else’s machine – especially if you don’t have direct access to that machine. As usual, though, it was simply me doing something dumb with auth – which is not easy. Have I mentioned that?

Basically, the problem was that while I was fetching the request digest from the page (this is a “classic” page in SharePoint Online), I wasn’t ever refreshing the token. In this application, people move around from page to page enough and use the application for short enough time periods that we simply hadn’t seen the problem in testing.

Smart people will think “Marc’s an idiot” on this one, but I play the fool so you don’t have to.

The code comes from a service I use everywhere in my applications for this particular client. It’s basically a set of utility functions that are useful when you’re using Angular with SharePoint. I’ve built the SPA using AngularJS (1.x), so my code below reflects that. However, similar logic can work with jQuery or whatever instead of AngularJS and $q. I adapted it from some TypeScript code Julie was using, so credit there, with a bit of add-on from me. I’ve added a bunch of comments and have also left some of the console logging I used in debugging – but commented out.

// Initialize with the token in the page in case there's a call that happens before we get a fresh token
self.requestDigest = document.getElementById("__REQUESTDIGEST").value; 

// This function makes a call to the contextinfo endpoint to fetch a new token
self.getRequestDigest = function (subsite) {

  var deferred = $q.defer();

  // This request must be a POST, but we don't need to send any data in the payload
  // i.e., it's a very simple call
  var request = {
    url: subsite + "/_api/contextinfo",
    method: "POST",
    data: '',
    headers: {
      'Content-Type': 'application/json',
      "Accept": "application/json;odata=nometadata"
    }
  };

  var success = function (response) {
    deferred.resolve(response.data);
  };

  var error = function (response) {
    deferred.reject(response.data);
  };

  $http(request).then(success, error);

  return deferred.promise;

}

// We call this function once when the page loads to set up the refresh looping
self.refreshRequestDigest = function (subsite) {

  // Set the default delay to 24 minutes in milliseconds
  var delayMilliseconds = 1440000;

  // Call the function above to get a new token
  self.getRequestDigest(subsite).then(function (contextInfo) {

    //        console.log(new Date() + " Old token:" + self.requestDigest);

    // Save the new token value in a variable available to all controllers using this service
    self.requestDigest = contextInfo.FormDigestValue;

    // Calculate the number of milliseconds which will be two minutes before the old token will expire
    delayMilliseconds = (contextInfo.FormDigestTimeoutSeconds * 1000) - 120000;

    //        console.log(new Date() + " New token:" + self.requestDigest);
    //        console.log("Waiting for " + delayMilliseconds + " ms");

    // Use setTimeout to set up the next token request
    setTimeout(function () {
      self.refreshRequestDigest(subsite);
    }, delayMilliseconds);

  });

};

// And here's the initial call to get things rolling. We need a token for the current site
self.refreshRequestDigest(_spPageContextInfo.webAbsoluteUrl);

The data we get back looks something like this:

{
  "FormDigestTimeoutSeconds": 1800,
  "FormDigestValue": "0xD22256967BAA39D5860CF0E812BE4AD0E018603851720D0E57581E1D1EBE8394865FF07EA3905DA1FE21D19D25BD6369B9D4F0F47725FBBB5AA7B3DE4EB54BAA,28 Jun 2017 15:21:52 -0000",
  "LibraryVersion": "16.0.6621.1204",
  "SiteFullUrl": "https://my.sharepoint.com/",
  "SupportedSchemaVersions": [
    "14.0.0.0",
    "15.0.0.0"
  ],
  "WebFullUrl": "https://my.sharepoint.com/mySiteName"
}

The thing we care about most is the FormDigestValue, as that’s what we use to make our POSTs to SharePoint while it’s valid. But also note that there is an attribute in the returned JSON for FormDigestTimeoutSeconds. That’s the number of seconds that this particular token will be valid. In every tenant where I’m using the code, that works out to be 30 minutes. However, there may well be a setting which can change that, or Microsoft may change the time span on us. Because of this, I use the value to calculate how often to request a new token: T-2 minutes.
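That T-2 calculation is simple enough to pull out into a tiny helper. Here’s a sketch – the function name is mine, not part of the service above:

```javascript
// Given FormDigestTimeoutSeconds from /_api/contextinfo, compute how
// long to wait before requesting a fresh digest: the token's lifetime
// minus a two-minute safety margin.
function refreshDelayMs(formDigestTimeoutSeconds) {
  var TWO_MINUTES_MS = 2 * 60 * 1000;
  return (formDigestTimeoutSeconds * 1000) - TWO_MINUTES_MS;
}

// With the 30-minute (1800 second) timeout I see in every tenant,
// we end up asking for a new token every 28 minutes.
var delay = refreshDelayMs(1800); // 1680000 ms = 28 minutes
```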

I’m pretty sure that there are pages in SharePoint which don’t do this correctly, so I’m not feeling like too much of an idiot. For example, when we click on an “app” – usually a list or library to you and me – in Site Contents, it opens in a new browser tab. Very often when I go back to that Site Contents page – when it has been sitting there for a while – I’ll get an unauthorized error. This may be fixed by now, though it has happened to me a lot.

I hope this is helpful. Using functions like this, we can make the whole auth thing easier on ourselves – and there’s zero reason to write this code from scratch every time. Store it in one place and if anything changes, you’ll have one place to fix it later.


  • We have occasional “software salons” – usually in one of our homes – where we simply get together to kick around what’s going on in our work lives, interesting things we’ve solved, etc. It’s a tremendously educational and useful exercise. I would encourage you to do something similarly informal if you don’t have enough water cooler time. At Sympraxis, we have no water cooler.

Yes, Virginia, You Can Get More than 5000 SharePoint Items with REST

If you haven’t been paying any attention, you might not know that I loathe the 5000 item limit in SharePoint. I may have mentioned it here and here and here and a bunch of other places, but I’m not sure. But given it’s there, we often need to work around it.

No 5000 item limit

I’ve written this little function before, but alas today I needed it again but couldn’t find any previous incarnations. That’s what this blog is for, though: to remind me of what I’ve already done!

In this case, I’m working with AngularJS 1.x. We have several lists that are nearing the 5000 item level, and I don’t want my code to start failing when we get there. We can get as many items as we want if we make repeated calls to the same REST endpoint, watching for the __next link in the results. That __next link tells us that we haven’t gotten all of the items yet, and provides us with the link to get the next batch, based on how we’ve set top in the request.
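To make the mechanism concrete, here’s roughly what the tail of each verbose-format response looks like when there are more items to fetch. The $skiptoken value is made up – SharePoint generates the real one for us:

```javascript
// Shape of one "page" of a verbose OData list items response when
// more items remain. On the last batch, d.__next is simply absent.
var pageOfResults = {
  d: {
    results: [ /* up to $top items in this batch */ ],
    __next: "https://my.sharepoint.com/mySiteName/_api/web/lists/" +
      "getbytitle('ZIP%20Codes')/items?$select=ID,Title&$top=5000" +
      "&$skiptoken=Paged%3DTRUE%26p_ID%3D5000"
  }
};
```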

Here’s an example. Suppose I want to get all the items from a list which contains all of the ZIP codes in the USA. I just checked, and that’s 27782 items. That’s definitely enough to make SharePoint angry at us, what with that 5000 item limit and all. Let’s not get into an argument about whether I need them all or not. Let’s just agree that I do. It’s an example, after all.

Well, if we set up our requests right, and use my handy-dandy recursive function below, we can get all of the items. First, let’s look at the setup. It should look pretty similar to anything you’ve done in an AngularJS service. I set up the request, and the success and error handlers just like I always do. Note I’m asking for the top 5000 items, using "&$top=5000" in my REST call.

self.getZIPCodes = function () {

  var deferred = $q.defer();

  var request = {
    method: 'GET',
    url: _spPageContextInfo.webAbsoluteUrl +
    "/_api/web/lists/getbytitle('ZIP Codes')/items?" +
    "$select=ID,Title" +
    "&$top=5000",
    headers: {
      // getAllItems below parses the verbose OData shape (d.results,
      // d.__next), so we need to ask for verbose here
      "Accept": "application/json; odata=verbose"
    }
  };
  var success = function (response) {

    // getAllItems resolves with the accumulated verbose-format results
    angular.forEach(response.data.d.results, function (obj, index) {

      self.zipCodes.push({
        ZIPCode: obj.Title
      });

    });
    deferred.resolve(self.zipCodes);
  };

  var error = function (response) {
    deferred.reject(response.data);
  };

// This is the "normal" call, which would get us up to 5000 items
// $http(request).then(success, error);

// This gets us all the items, no matter how many there are.
  self.getAllItems(request).then(success, error);

  return deferred.promise;

};

If there are fewer than 5000 items, then we don’t have a problem; the base request would return them all. The commented-out $http(request).then(success, error); line is what would do that “normal” call. Instead, I call my recursive function below, passing in the request only, even though the function can take two more arguments: results and deferred.

// Recursively get all the items in a list, even more than 5000!
self.getAllItems = function(request, results, deferred) {

  // The first time through, these three variables won't exist, so we create them. On subsequent calls, the variables will already exist.
  var deferred = deferred || $q.defer();
  var results = results || [];
  results.data = results.data || [];

  // Make the call to the REST endpoint using Angular's $http
  $http(request).then(function(response) {

    // The first time through, we don't have any data, so we create the data object with the results
    if (!results.data.d) {
      results.data = response.data;
    } else {
      // If we already have some results from a previous call, we concatenate this set onto the existing array
      results.data.d.results = results.data.d.results.concat(response.data.d.results);
    }

    // If there's more data to fetch, there will be a URL in the __next object; if there isn't, the __next object will not be present
    if (response.data.d.__next) {
      // When we have a next page, we call this function again (recursively).
      // We change the url to the value of __next and pass in the current results and the deferred object we created on the first pass through
      request.url = response.data.d.__next;
      self.getAllItems(request, results, deferred);
    } else {
      // If there is no value for __next, we're all done because we have all the data already, so we resolve the promise with the results.
      deferred.resolve(results);
    }

  });

  // Return the deferred object's promise to the calling code
  return deferred.promise;

};

The recursive function simply keeps calling itself whenever it sees that the __next attribute of the response is present, signifying there is more data to fetch. It concatenates the data into a single array as it goes. In my example, there would be 6 calls to the REST endpoint because there are 27782 / 5000 = 5.5564 “chunks” of items.
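If you want to sanity-check that arithmetic, the number of REST calls is just a ceiling division – a partial “chunk” still costs one call. (The function name here is mine, for illustration only.)

```javascript
// How many round trips the recursive function makes for a given
// item count and page size: round the quotient up, since the last
// partial batch still requires its own request.
function restCallCount(totalItems, pageSize) {
  return Math.ceil(totalItems / pageSize);
}

var calls = restCallCount(27782, 5000); // 6 calls for the ZIP code list
```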


NOW, before a bunch of people get all angry at me about this, the Stan Lee rule applies here. If you have tens of thousands of items and you decide to load them all, don’t blame me if it takes a while. All those requests can take a lot of time. This also isn’t a get-out-of-jail-free card. I’m posilutely certain that if we misuse this all over the place, the data police at Microsoft will shut us down.

In actual fact, the multiple calls are separated by periods of time that feel short to us but are almost eternities to today’s high-powered servers. In some cases, you might even find that batches of fewer than 5000 items are *faster* for you.

In any case, don’t just do this willy-nilly. Also understand that my approach here isn’t as great at handling errors as the base case. Feel free to improve on it and post your improvements in the comments!
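For what it’s worth, here’s one way the error handling could be tightened up – a framework-agnostic sketch using native promises, where fetchPage is a stand-in of mine for the $http call, not anything from the code above. Because each recursive step returns the chained promise, a failure on any page rejects the whole thing instead of leaving the caller hanging:

```javascript
// fetchPage(url) must return a promise for an object shaped like
// { results: [...], next: "nextUrl" | null }.
function getAllPages(fetchPage, url, collected) {
  collected = collected || [];
  return fetchPage(url).then(function (page) {
    collected = collected.concat(page.results);
    // More data? Recurse on the next URL; otherwise we're done.
    return page.next
      ? getAllPages(fetchPage, page.next, collected)
      : collected;
  });
  // No catch here on purpose: a rejection from any page propagates
  // straight out to the caller's .catch().
}
```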

How Can I Customize List Forms in SharePoint Online?

The other day, a client asked me a pretty classic set of questions about customizing SharePoint Online list forms. As with some other arenas of endeavor in Office 365, there is more than one way of doing things and it can be confusing.

I am looking at options to customize List forms on sites in SPO and I am trying not to have to deal with code.

My first choice has been InfoPath Designer but I know this is being deprecated and it seems like some of my sites are not allowing the use of InfoPath to customize my forms. [This was an internal issue, not an InfoPath issue.]

I know I could add web parts to the form pages and use JavaScript / jQuery or I could try and edit in [SharePoint] Designer but without the design view I am hesitant to mess around there too much.

Do you have any other tools you recommend for customizing List Forms?

Here’s an expanded version of the answer I gave my client, which incorporates some additional ideas and feedback I gleaned from Dan Holme (@DanHolme) and Chris McNulty (@cmcnulty2000) at Microsoft. (It’s nice being in Redmond for MVP Summit this week, where I can easily catch these guys in the hallway!)


The answer is the classic “it depends”. The main thing it depends on is what type of customization you actually want to do. There are a number of approaches and each has its pros and cons.

Adding JavaScript to the out-of-the-box forms

This is still possible, but I would discourage it in many cases, even though this has been my bread and butter approach for years. The out-of-the-box forms are changing, and “script injection” to do DOM manipulation is less safe. Remember you’re working on a service now, and it can change anytime.

SPServices

Unfortunately, this means that getting things like cascading dropdowns into forms is becoming harder than it used to be with SPServices. It’s not that you shouldn’t use that approach, but it does mean that the clock is ticking on how long it will continue to work. At this point, I’m recommending building entirely bespoke custom forms rather than adding JavaScript to the existing forms, though I still do plenty of the latter.

InfoPath


Yes, it’s deprecated or retired or whatever, but Microsoft will support InfoPath through 2026 at this point. InfoPath is often overkill – you can do a lot with list settings – but at the same time, it can still be useful. Keep in mind that using InfoPath means you’ll need to stick with the classic view for the list or library.

PowerApps + Flow


These new kids on the block are the successors to InfoPath, as announced at Microsoft Ignite. They can do a lot, but they may or may not meet your needs at this point. They did reach GA recently.

PowerApps would be how you would build the forms themselves and with Flow you can add automation – what we’ve called workflow in the past.

PowerApps embedding is “coming soon”. This will allow us to embed PowerApps into the list form context, as shown in the screenshot below. This will be a GREAT thing for a lot of list scenarios. At that point, I think the need for InfoPath will be greatly diminished.

PowerApps Embed

SharePoint Framework (SPFx)


The SharePoint Framework is the next gen development model for things like custom Web Parts, which will run client side. We can build pretty much anything you want with this, but it’s still in preview. At some point in the not-too-distant future, I think we’ll be building a lot of custom forms using SPFx.

Fully custom forms


To create fully custom forms today, you might use development frameworks like AngularJS or KnockoutJS (to name only a few). This can be the right answer, with the goal being to build in a way that can eventually merge into SPFx and the new “modern” pages. Knowing a bit about SPFx is a *very* good idea if you are going to go this route today. You’ll want to build your custom forms in such a way that you aren’t locking yourself out of wrapping them up into SPFx for improved packaging and deployment down the road.

Third party tools

Because of how I roll, I haven’t used any of the third party tools out there, but there are many. The ones I hear come up most often are Nintex Forms, K2 Appit, and Stratus Forms. Obviously, there’s an economic investment with these choices, as well as a learning curve, so it’s always “your mileage may vary”.



The landscape here continues to change, so keep your eyes and ears open for more news from Microsoft and bloggers like me in the future!

Uploading Attachments to SharePoint Lists Using REST

In a previous post, I walked through Uploading Attachments to SharePoint Lists Using SPServices. Well, in the “modern” world, I want to use REST whenever I can, especially with SharePoint Online.

I ran into a challenge figuring out how to make an attachment upload work in an AngularJS project. There are dozens of blog posts and articles out there about uploading files to SharePoint, and some of them even mention attachments. But I found that not a single one of them worked for me. It was driving me nuts. I could upload a file, but it was corrupt when it got there. Images didn’t look like images, Excel couldn’t open XLSX files, etc.

I reached out to Julie Turner (@jfj1997) and asked for help, but as is often the case, when you’re not as immersed in something it’s tough to get the context right. She gave me some excellent pointers, but I ended up traversing some of the same unfruitful ground with her.

Finally I decided to break things down into the simplest example possible, much like I did in my earlier post with SOAP. I created a test page called UploadTEST with a Content Editor Web Part pointing to UploadText.html, below:

<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.3/jquery.js"></script>

<input id="my-attachments" type="file" fileread="run.AttachmentData" fileinfo="run.AttachmentInfo" />

<script type="text/javascript" src="/sites/MySiteCollection/_catalogs/masterpage/_MyProject/js/UploadTEST.js"></script>

Like I said, I wanted to keep it simple. Here’s what I’m doing:

  • In line 1, I load a recent version of jQuery from cdnjs. On the theory that Angular just adds another layer of complexity, I wanted to try just jQuery, which is useful for its $.ajax function, among many other things.
  • In line 3, I’ve got an input field with type=”file”. In “modern” browsers, this gives us a familiar file picker.
  • In line 5, I’m loading my script file called UploadTEST.js, below:
$(document).ready(function() {

 var ID = 1;
 var listname = "UploadTEST";

 $("#my-attachments").change(function() {

  var file = $(this)[0].files[0];

  var getFileBuffer = function(file) {

   var deferred = $.Deferred();
   var reader = new FileReader();

   reader.onload = function(e) {
    deferred.resolve(e.target.result);
   }

   reader.onerror = function(e) {
    deferred.reject(e.target.error);
   }

   reader.readAsArrayBuffer(file);

   return deferred.promise();
  };

  getFileBuffer(file).then(function(buffer) {

   $.ajax({
    url: _spPageContextInfo.webAbsoluteUrl +
     "/_api/web/lists/getbytitle('" + listname + "')/items(" + ID + ")/AttachmentFiles/add(FileName='" + file.name + "')",
    method: 'POST',
    data: buffer,
    processData: false,
    headers: {
     "Accept": "application/json; odata=verbose",
     "content-type": "application/json; odata=verbose",
     "X-RequestDigest": document.getElementById("__REQUESTDIGEST").value,
     "content-length": buffer.byteLength
    }
   });

  });

 });
});

Here’s how this works – and yes, it does work!

  • I’ve got everything wrapped in a $(document).ready() to ensure the page is fully loaded before my script runs.
  • Lines 3-4 just set up some variables I could use for testing to keep things simple.
  • In line 6, I bind to the change event for the input field. Whenever the user chooses a file in the dialog, this event will fire.
  • In line 8, I’m getting the information about the file selected from the input element.
  • Lines 10-26 is the same getFileBuffer function I used with SOAP; it’s where we use a FileReader to get the contents of the selected file into a buffer.
  • The function in lines 28-42 runs when the file contents have been fully read. The getFileBuffer function returns a promise, and that promise is resolved when the function has gotten all of the file contents. With a large file, this could take a little while, and by using a promise we avoid getting ahead of ourselves. Here we make the REST call that uploads the attachment.
    • The URL ends up looking something like: “/my/site/path/_api/web/lists/getbytitle(‘UploadTEST’)/items(1)/AttachmentFiles/add(FileName=’boo.txt’)”
    • The method is a POST because we’re writing data to SharePoint
    • The data parameter contains the “payload”, which is the buffer returned from the getFileBuffer function.
    • The headers basically tell the server what we’re up to:
      • Accept doesn’t really come into play here unless we get a result back, but it says “send me back json and be verbose”
      • content-type is similar, but tells the server what it is getting from us
      • X-RequestDigest is the magic string from the page that tells the server we are who it thinks we are
      • content-length tells the server how many bytes it should expect

This worked for me on a basic Custom List (UploadTEST). So now I knew it was possible to upload an attachment using REST.

I took my logic and shoved it back into my AngularJS page, and it still worked! However, when I switched from $.ajax to AngularJS’s $http, I was back to corrupt files again. I’m still not sure why that is the case, but since I’m loading jQuery for some other things, anyway, I’ve decided to stick with $.ajax for now instead. If anyone reading this has an idea why $http would cause a problem, I’d love to hear about it.
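My best guess – and it is a guess, since I haven’t confirmed it in this project – is that AngularJS’s default transformRequest JSON-serializes any plain object body, and an ArrayBuffer is “just an object” to that check, so the file bytes get mangled on the way out. You can see the mangling in plain JS, and the likely fix is an empty transformRequest, which mirrors what $.ajax’s processData: false does (the request fragment in the comments is a sketch; attachmentAddUrl stands in for the real URL):

```javascript
// JSON-serializing an ArrayBuffer throws the bytes away entirely --
// it has no enumerable properties, so it stringifies to "{}".
var buffer = new ArrayBuffer(4);
var mangled = JSON.stringify(buffer); // "{}" -- the file content is gone

// The probable fix for $http, left here as a sketch:
// var request = {
//   method: "POST",
//   url: attachmentAddUrl,  // same AttachmentFiles/add(...) URL as above
//   data: buffer,
//   transformRequest: [],   // skip the default JSON serialization
//   headers: { /* same headers as the $.ajax version */ }
// };
// $http(request).then(...);
```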

Once again, I’m hoping my beating my head against this saves some folks some time. Ideally there would be actual documentation for the REST endpoints that would explain how to do things. Unfortunately, if there is for this, I can’t find it. I’d also love to know if /add allows any other parameters, particularly whether I can overwrite the attachment if one of the same name is already there. Maybe another day.

From the SPServices Discussions: Why Should I Bother to Learn XSLT?

Here’s a question from gdavis321 from the good old SPServices discussions. In most cases, I just answer the questions in situ, but occasionally – as you frequent readers know – I realize what I’m writing feels more like a blog post, so I move it over here.

It seems that Spservices (thank you for this god send) can do anything that you might want to do with a content query webpart or other webpart… am I missing something? Why should I bother to learn XSLT?

It’s a fair question. As someone who loves using the Data View Web Part (DVWP) – which is XSL-based – the answer is, as always, it depends.

Doing everything client side isn’t always a great idea. For each specific requirement, you should look at whether it makes more sense to do the heavy processing on the server or client. There are many variables which go into this, like client machine capabilities, data volume, number of data sources, need to join data from different sources, skills on the development team, desired time to market, etc.

In SharePoint 2013, much of the processing has moved to the client side (often called Client Side Rendering, or CSR), so many people think that we should just process everything client side now. I worry about this sort of myopia.

Client machines – meaning the machine sitting on your desk or in your lap or hand rather than the server, which sits in a data center somewhere – have better and better capabilities. Yet in many organizations, you may still see Windows XP machines with 1 GB of RAM – or worse!

Additionally, client side processing may mean moving a huge volume of data from the server to the client in order to roll it up, for example. If you’re moving thousands of rows (items) to make one calculation, it *can* feel sluggish on a client machine without a lot of oomph.

So client side libraries like SPServices or KnockoutJS or AngularJS (among many others) are fantastic for some things, not so fantastic for others.

When it comes to the JavaScript versus XSL question, both approaches should be part of your toolset. Sometimes good old XSL will be an excellent call if you can do some of the heavy lifting on the server, then pass the results to the client for further processing. The output need not be just nicely formatted list views, but could be sent in such a format as to make the client side processing easy to start up upon page load. My approach is often to ship the data to the browser via a DVWP and then process it further with jQuery on the client.

The mix of what happens where is on a spectrum, too, depending on what you need to do. You might roll up tens of thousands of items in a DVWP and then request items from two or three other lists from the client as the page loads to optimize your architecture.

Yup, there’s that architecture word. A good SharePoint architect will know when to use each of the tools in the toolkit, and JavaScript and XSL are just two of many.

Keep in mind that because “it depends”, your mileage may vary in any and all of this: cavete architectus.