Sympraxis’ SharePoint Client Side Development Pipeline – Where Should We Put Our Stuff?

This entry is part 2 of 2 in the series Sympraxis Client Side Development Pipeline

As I mentioned in my previous post, when Julie Turner (@jfj1997) joined Sympraxis, we quickly realized we needed to get smarter about managing our code and our overall development process.

One of the first things Julie and I discussed was where we should store our code. Even way back before SharePoint entered my life, I’ve thought of code as content. It’s just a different type of content that has different content management requirements. For this reason, I’ve never fretted too much about storing my code in a SharePoint Document Library or in the Master Page Gallery.

Code as Content

That works fine when you’re a development team of one. But as Julie and I started working on a project together, we knew we needed to do better. Plus, angst and all that. One thing that is key here – and is key in many conversations about stuff like this – is that there is no one-size-fits-all answer. Managing code is like managing any other content in that there should be some governance around it. But the governance doesn’t have to be – nor should it be – the same in every instance.

So we thought about what we wanted to be able to accomplish. Basically, what our requirements were to get rolling.

  • We needed an offsite (meaning not just on our machines) repository. This would make us worry less about disaster recovery. We were both doing backups to the cloud (me with Crashplan and Julie with Acronis), but we wanted a repository that belonged to the company, not to either of us personally.
  • We wanted to improve our code reuse. It’s not that we build the same thing over and over, but like the functionality in SPServices, there are some things we do fairly often. In other words, our tricks of the trade. By storing all of our code in one place, we hoped we would make reuse easier.
  • We work with clients on project-based work. Sometimes we work with a client for a while, then they take over for a while, and we reengage with them when a new need arises. We wanted to make it easier for ourselves when that re-engagement happened: basically improve our speed-to-useful again.

As Julie mentioned in her post, she had used Team Foundation Server (TFS) Online at a previous job. I had touched TFS at a previous gig, but like many Microsoft tools it seemed way overblown for our needs. Plus, it’s really tuned for Visual Studio, which I never use.

Julie decided to set up a private GitHub repository, figuring it would be more palatable to me. I’m always a fan of using things that are simple (GitHub confused me for years, so I’m not sure it qualifies as simple!) and I liked the fact that we would be using something the wider tech community had stamped with a seal of approval.

GitHub_Logo

We went with an Organization Bronze Plan – which now seems to be obsolete. (Here’s a link to the old plans courtesy of the Internet Archive Wayback Machine.) This gave us up to 10 repos and unlimited users for USD$25/month. Not much money, really, and we figured 10 repos was plenty.

Once we had the repo, we thought about how to organize it. We started with three repos for our organization: clients, admin, and samples. Our thought was that we would put all of the client work artifacts into that one clients repo. It certainly addressed our requirements to work that way. The admin and samples repos would be for Sympraxis administrative stuff and demos or samples we used for speaking sessions, respectively.

This got us up and running. We were both storing our code in a central repository and we could use whatever code editor (or IDE, depending on what terminology you find acceptable) we wanted. I’m using WebStorm a lot these days, but I also use Sublime Text and SharePoint Designer, and… Yeah, whatever works. Julie came from using Visual Studio, but these days she’s mainly using Visual Studio Code.

One of the greatest things about this Brave New World is that what we use to edit code really doesn’t matter that much. I really started liking WebStorm when I realized that its tooling for GitHub actually made GitHub make sense to me. I love the ecosystem of plugins for Sublime Text. And no IDE understands SharePoint as well as SharePoint Designer does. So I get to use whatever I need for the task at hand, and so does Julie. We’re just putting the results of that work into the same place.

As we moved forward with this setup, we started to see a few flaws in our thinking. The great thing about being a learning organization (the learning group is YUGE at Sympraxis) is that we comfortably revisit our decisions whenever it makes sense. The clients repo quickly became unwieldy. (Some of you would say “Duh!” here, but we’re fine with the way this went.) We were still manually copying code into our clients’ environments or editing in place and taking copies to dump into GitHub. We were very happy with GitHub, but not with the mechanics of how we were using it. So all wasn’t rosy yet.

In my next post, I’ll talk about the next steps we took to tune things…

 

Sympraxis’ SharePoint Client Side Development Pipeline – Introduction

This entry is part 1 of 2 in the series Sympraxis Client Side Development Pipeline

As promised in our August Sympraxis Newsletter, Julie and I wanted to start a new blog series to explain our new SharePoint client side development pipeline. We’ll each put our own spin on the idea, as our ways of thinking, backgrounds, and blog focuses are a bit different (but totally compatible!), so be sure to check Julie’s take as well.

We worked a lot in July and early August to improve our development practices in general. Julie joined Sympraxis with some great ideas, and she’s managed to teach this old dog quite a few new tricks. We’ve got all of our current client projects up on private GitHub repositories and we’ve got a really nice workflow going now that both improves our efficiency and gives us far better disaster recovery capabilities.

GitHub_Logo

This isn’t the pipeline we’ll use when the new SharePoint Framework (SPFx) is released in Preview (hopefully soon!), but it borrows from some of the tech used in that process. If you’re still doing client side development that you plant in SharePoint pages with the Content Editor Web Part (CEWP) or the Script Editor Web Part (SEWP), then this pipeline process will prepare you for some of the things you’ll need to understand when you get to SPFx.

We edit our client side code on our devices (laptops, tablets, even phones!) in local copies of the repos, use gulp tasks with spsave to push changes to the Office 365 tenant or on premises environment, and then commit blocks of changes to the GitHub repos.

Gulp logo
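To give you a flavor of what that looks like, here’s a minimal, illustrative sketch of the sort of gulp task we mean – one way to wire it up is with the gulp-spsave plugin (a gulp wrapper around spsave). We’ll get into the real details later in the series; the site URL, folder, and credentials below are just placeholders, so treat this as a sketch rather than our exact setup.

    // gulpfile.js – a minimal, illustrative sketch (not our exact tasks)
    // Assumes: npm install gulp gulp-spsave --save-dev
    var gulp = require("gulp");
    var spsave = require("gulp-spsave");

    // Push local changes up to a Document Library in the target site
    gulp.task("push", function () {
      return gulp.src("./src/**/*.{js,html,css}")
        .pipe(spsave(
          {
            siteUrl: "https://yourtenant.sharepoint.com/sites/yoursite", // placeholder
            folder: "ScriptsCSS"                                         // placeholder library/folder
          },
          {
            username: "you@yourtenant.onmicrosoft.com", // placeholders – better to prompt
            password: "********"                        // or read from an environment variable
          }
        ));
    });

    // Re-push automatically whenever a local file changes
    gulp.task("watch", function () {
      gulp.watch("./src/**/*.{js,html,css}", ["push"]);
    });

With a setup along these lines, saving a file locally while the watch task runs is enough to get the change into the target site, and we commit to GitHub whenever a logical block of work is done.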

We’re even using different IDEs to work on our code. Right now my favorite is WebStorm, whereas Julie prefers Visual Studio Code. Because the tooling we use all sits on top of Node.js – which is truly cross platform, cross IDE, cross everything – it doesn’t really matter that we’re using different IDEs.

Webstorm logo

If you’ve been using source control for a long time, this may seem like old hat to you. But for many client side devs (like this old goat), there haven’t been very good ways to do it effectively. The old “map a local drive and edit the JavaScript/HTML/CSS in place” method has been good enough for years. But with the newer Document Library “experiences” on Office 365, the trusty Open with File Explorer is starting to work unreliably. Besides, it was time to get with the program.

For years now, my “pipeline” has been to map a drive to my code repository in the client tenant or installation, edit the files in place, and all was good. I’ve gotten in the habit of storing my code artifacts in one of two places:

  • ScriptsCSS – This was my practice for quite a few years, and is simply a Document library – usually in the root of the Site Collection – where I put things. Because it’s a Document Library, I can turn on versioning if I choose, restrict access for write (though everyone needs read permissions to access the code), etc. It’s also in “plain sight” for the people I work with at the client. This is usually a *good* thing, but more recently I switched to using…
  • /_catalogs/masterpage/_ClientName – This is a bit more out of the way, as it means putting my code in the Master Page Gallery. If permissions are set correctly, this means that few people can accidentally wander into it, and everyone has read access by default.

Either of these locations works, and has worked for me regardless of the version of SharePoint, whether it be 2007, 2010, 2013, 2016, or any flavor of SharePoint Online. My original goal with all of this was to avoid deploying ANYTHING to the server. It just so happened that this goal ended up meshing with where Microsoft has come over the years.
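To make that concrete, here’s a simplified, illustrative example of how a page typically picks up code stored in one of those locations: a small HTML file, referenced by a CEWP Content Link (or pasted into a SEWP), which in turn loads the JavaScript and CSS from the library. Every file name and path here is made up for illustration.

    <!-- ClientNameHomePage.html – referenced by a CEWP Content Link (illustrative names and paths only) -->
    <link rel="stylesheet" type="text/css" href="/ScriptsCSS/ClientName.css" />
    <script type="text/javascript" src="/ScriptsCSS/jquery.min.js"></script>
    <script type="text/javascript" src="/ScriptsCSS/jquery.SPServices.min.js"></script>
    <script type="text/javascript" src="/ScriptsCSS/ClientNameHomePage.js"></script>
    <div id="client-custom-content"></div>

This is also why everyone needs read access to wherever the code lives: the scripts run in the browser of whoever visits the page.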

My reversion technique was simply Ctrl-Z (undo) in my editor of choice. If I had versioning on in the Document Library, then the version history was available for me, too. As for IDEs, sometimes I use SharePoint Designer, sometimes Sublime Text, sometimes WebStorm.

One of my other approaches to “working in production” (a collective gasp goes up from most people) was to create copies of the “prod” files I was working with – maybe something like HomePageNEW.ASPX, HomePageNEW.html, HomePageNEW.js, etc. This gives me a way to work on the next version of everything while still using the same content base. In cases where there are content updates, I may even have a whole shadow set of lists and libraries for testing. Having them in the same location makes it easy to copy content back and forth – usually using Sharegate, of course! I still do this, but now with better source control I have a better record of whence I’ve come.

None of these practices are really source control. Sure, I would ZIP everything up once in a while and store it away, but I couldn’t get back to a specific point in time. My approaches *worked* but weren’t as robust as I would have liked.

In the rest of the series, Julie and I will explain how we arrived at our new approaches in our posts, how they work – in detail, with code! – and what we get out of it. If you have specific questions, please feel free to add them in the comments and we’ll attempt to cover them. And don’t forget to read Julie’s spin on it, too!

 

Software Development Literacy – Wave of the Future or Doomsday Device?

A few months ago, I read a newspaper article – which unfortunately I can’t find – about the idea that software development literacy may someday seem as normal as reading literacy is today. I didn’t think it was far-fetched at all. In today’s world *everyone* touches a computer in some way, even if it’s only the chip that runs the fare collector on public transportation. (This isn’t a discussion about rich and poor – I tried to come up with the most benign example I could. Admittedly, it’s more a first world example.)

Today there was an article in the Boston Globe about a company called FreeCause here in Boston that is doing something unique. The story explained that…

…29-year-old company chief executive Michael Jaconi told all 60 of his employees that they had to learn the programming language JavaScript. The idea is not to turn everyone into an engineer, but to give employees — from accountants to designers to salespeople — a better understanding of what goes into developing the company’s software.

Jaconi’s initiative is a recognition that technology has inserted itself into almost every aspect of modern life, and it’s a subject people increasingly need to know. In many companies, technology often creates barriers that separate technical from nontechnical workers.

“There’s a pretty big divide between engineers and nonengineers, and what I wanted to do was bring those two camps closer together,” said Jaconi, a serial entrepreneur and former political campaign worker who is learning to code along with his employees. “I thought that this would facilitate more efficiency, bring teams closer together, and ultimately make our company perform better.”

Oddly, unless I’m really out of it, there’s a bug in the example the article showed in one of the accompanying pictures. Bonus points if you spot it.

Learning JavaScript

Image from the Boston Globe Web site

I tweeted a link. (Through the wonders of HootSuite – the awesome social media tool I prefer over all the others – I also posted it to Facebook and LinkedIn at the same time.)

The fastest response I got on Twitter was from my friend Dan Antion (@DAntion):

I expected I’d hear something similar from a good number of the developers who follow me on Twitter, and eventually I did hear from quite a few with what amounted to disparaging comments about the idea. At best they were, like Dan’s, a sort of “uh-oh”.

I think it’s more complex than that initial reaction and also more important. Let me explain my thoughts.

As a consultant, I am paid to be an expert in some things. What many of my clients don’t realize, though, is that because I don’t specialize in any particular industry and I’ve been in consulting a very long time, I also have to know at least something about a lot of things: car manufacturing, stock trading, theme parks, higher education, pharmaceutical discovery, and the list goes on. (Those are all examples of real projects I’ve worked on over the years.) I have enough humility to know that I’m not an expert in fields out of my chosen one, but I have to know *something* about others in order to advise in a useful way and to write useful solutions.

Think about your major in college. Do you “do” that thing now as your everyday activity? I majored in Mathematics, and it’s pretty rare that I “do” math. I studied all kinds of things in college: psychology, chemistry, film making, rocks for jocks [geology], etc. I don’t “do” any of those things on a daily or even yearly basis. But I’ll argue with anyone who says that a liberal arts education – wherein one studies a wide range of things – doesn’t add up to a well-rounded, multi-talented individual. (Full disclosure: my major was actually called “Computer Mathematics”. The last time I came up with an interesting, computer-based way to factor numbers into primes was in college, though.)

Another thing I’ve seen over my years of consulting is that, generally speaking, the teams that I’ve seen be most effective share some traits. They are usually cross-functional, highly motivated, and inquisitive about each other’s knowledge. I’d take a team with those traits over specific, homogeneous knowledge any day. Note that I mentioned “inquisitive about each other’s knowledge”. That means that they want to learn a little something about what the others know. This helps them to work together more effectively.

As software development becomes more and more pervasive, what’s wrong with everyone having basic literacy in it?

We might be able to interact with technical customer support better. We may be able to understand what to do or not do to avoid infecting our computers with viruses. We may be able to save unending time by not doing things that cause our work to be lost, requiring us to recreate it. We might understand what we’re asking each other for just a little bit better, making us more able to collaborate on the important parts of the task at hand rather than level setting every time.

Simple programming knowledge (I almost said “basic programming knowledge”, but that would be too specific) is an excellent idea. To apply knowledge management principles to “using a computer”: if we can identify the key things the high performers know that make them good at it and can teach the low performers just a scintilla of that knowledge, everyone’s competency rises. By knowing something about what’s going on under the hood, I posit we all become better digital denizens.

Also note that nothing in the article said that the accountant or the salesperson has to become a software developer. They just have to learn the basics – enough for “every FreeCause employee [to] develop a product such as a Web page or toolbar component that could potentially be integrated into the company’s loyalty rewards software.” That’s potentially. Not definitely, and not absolutely.

I’m going to go with Jaconi’s idea as a wave of the future, and one I welcome. There’s plenty of other stuff to worry about in the doomsday category, and this isn’t one of them.


SharePoint Saturday Tampa Wrap-Up

I thoroughly enjoyed speaking at SharePoint Saturday Tampa yesterday. Michael Hinckley and his team (Is there a team, really, or is it all Michael??? I know his daughter was there helping at the Speaker Dinner.) did another crackerjack job with the event.

It was great to see old friends like Michael Oryszak (@next_connect), Michael Greene (@webdes03), USPJA student Anita Webb (@awebb55), and the godfather of all that is SharePoint Saturday, Michael Lotter. Yes, it was a very “Michael” kind of event. It was also great to see some other familiar faces and meet some new folks, most of whom were not named Michael.

My session on Developing in SharePoint’s Middle Tier was well-attended by an energetic and interested group. I always like to have a lot of questions and good discussion and I was happy to have it go that way again this time.

The few slides I used as an introduction for the session are available here. If the propeller head joke offended anyone, I apologize. Long live the propeller heads.

I’m working on making sure that the demos can be instantiated on Office 365 as well as “on premises” SharePoint, so I’ve posted new WSPs to my Sympraxis Consulting Demos site. If you happen to try them with Office 365, please let me know how it goes. While these techniques will absolutely work with Office 365, I haven’t been able to get the WSPs to transfer successfully because I’ve built them in an “on premises” VM.

“Middle Tier (Customized Navigation)”

image6

“Budget”

image7

Note that there is also a SharePoint 2007 version of the “Customized Navigation” demo, which is a few revs back, but not too far from what I demonstrated.

If you attended my session, thanks for coming and let me know if you have any questions about the demos.

Choosing the Right Development Tools for Your Organization

At one of my clients, there’s a debate going on about whether or not to control which development tools people use. In my mind, that’s a no-brainer: absolutely!

I say "Never let the inmates run the asylum." What I mean by this is that if you ask N developers what tools you should use, you’ll probably get N^2 answers. Developers usually shouldn’t make these decisions on their own. (I say this as a developer. I’d love to use Pascal or FORTRAN or assembler or FOCUS again, but would it make any sense???)

When I was 23 or 26, I always wanted to play with the cool new stuff, just like the 23- or 26-year-olds today. What I didn’t have then was perspective on what works well over a long period of time. Now that I do, I know that the Wild West approach may seem like a good idea to some, but it’ll cause tears sooner rather than later.

Almost every organization I’ve worked with has an architect role or office that screens new tools and makes suggestions or edicts. How tightly the organization is tied to those suggestions or edicts depends on what type of organization it is. If a bank said, "Let’s use whatever we want in our ATMs," that would be a real problem. On the other hand, in an R&D environment, considerable leeway may actually make sense.

The academic model says "Let’s try anything and see what works"; the corporate model is usually more like "Let’s look at our overall strategy [the business strategy, not the IT strategy, at least first] and determine which tools will be the most productive and cost effective."

Cost effectiveness takes many forms: can we produce high-value solutions quickly and reliably, can we scale up our staff rapidly if we need to, can we keep up with new software versions reliably, can we build good institutional memory for maintainability, can we support our solutions, can we train our users fast, etc.

Now I’ve seen most large organizations totally blow it on this. The controls end up being the goal rather than the effectiveness and cost management. This usually leads to skunk works, or some other form of "cheating" to get things done. Letting the pendulum swing too far that way can actually cost *more* than the Wild West approach.

All organizations should have some sort of guidelines at least, and in most cases, stringent rules. As with SharePoint governance, there’s always a lot of "it depends", but there will be a right set of answers for every organization, and some sort of control is what is going to make the organization succeed in the long run.