SharePoint MindsharpBlogs > Ben Curry, CISSP, SharePoint Server MVP
http://www.microsoft.com/learning/en/us/books/12197.aspx
http://www.microsoft.com/MSPress/books/10623.aspx

 Last 10 Posts

Nov 15
Published: November 15, 2009, by Ben Curry
I'll be speaking at the ATL users group tomorrow night at 6:30 EST. For directions and more info, see http://www.atlspug.com/default.aspx.
 
The topics are:
 
Session 1: Installing SharePoint Server 2010
Topic Description

Much has changed from the 2007 version of SharePoint. I'll be discussing a server farm installation of SharePoint Server 2010, including the new Shared Services model (service applications), how those will upgrade, and the limitations of 2007 and 2010 integration. Just for fun, I'll also give you a quick demonstration of building and configuring service applications using Central Administration and PowerShell!

Session 2: Enterprise Content Management Upgrades in SharePoint Server 2010
Topic Description

Wow! We have some really cool features that are new to SharePoint Server 2010 - DocumentIDs, robust Information Management Policies, and Document Sets. BUT, one of the most anticipated features is the centralized taxonomy and content type hub. Come see a live demonstration and early best practices for creating a content type hub and managed metadata service.

 
My apologies for posting this late. I hope to see you there!
 
Ben Curry


Nov 03
Published: November 03, 2009, by Ben Curry
I'll be posting more about the upgrade process, but when you get the 2010 bits, be sure SQL Server 2008 Cumulative Update 2 is already installed, whether it's a new install or an upgrade - but especially an upgrade. It kind of wrecked my upgrade when I had to restart the config wizard after applying CU2.
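If you want to confirm the patch level before you kick off the config wizard, a quick check from a command prompt on the SQL box will show it (the server name is just a placeholder - compare the build you get against the CU2 build listed in Microsoft's KB article rather than trusting my memory):

sqlcmd -S SQLSERVER01 -E -Q "SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel')"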
 
 
-ben


Oct 29
Published: October 29, 2009, by Ben Curry

One of the first things many administrators will notice on a new SharePoint Server 2010 install is the lack of a Site Directory (well, the lack of the Collaboration Portal template entirely). If you've upgraded from 2007, then your current Site Directory will still work - otherwise, you're out of luck, kind of...

I got to looking around Central Administration and noticed that we still have the Site Directory configuration settings - they look just like 2007. So, how do we get the Site Directory back?

I first tried just unhiding the site template in C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\1033\XML\webtempsps.xml:

Just change the Hidden="True" to Hidden="False"

But when I went to create a new site, it didn't show in the available templates. I also noticed what appears to be a new attribute in the XML: VisibilityFeatureDependency="5F3B0127-2F1D-4cfd-8DD2-85AD1FB00BFC". Hmmm... so what if we activate that feature on the Site Collection where we want to host our Site Directory? Yes - that works! You activate the feature with:

stsadm.exe -o activatefeature -id 5F3B0127-2F1D-4cfd-8DD2-85AD1FB00BFC -url http://portal.contoso.msft

By the way, that feature is called PortalLayouts, if you are curious.

Getting there from scratch

Now, the easiest way to get a Site Directory from scratch is to activate the feature and then create the site. First, I created a Publishing Portal as the top-level site.

Next, activate the feature on the Site Collection

stsadm.exe -o activatefeature -id 5F3B0127-2F1D-4cfd-8DD2-85AD1FB00BFC -url http://portal.contoso.msft

Finally, create the sub-site via stsadm.exe (or you could do it from the UI if you unhide the template)

stsadm.exe -o createweb -url http://portal.contoso.msft/sitedirectory -sitetemplate spssites#0 -title "Site Directory"

You now have a Site Directory!

Cheers, and I hope this helps.

Thanks to Jim Curry for hacking around a bit with me to solve this.

Ben Curry

SharePoint MVP



Oct 17
Published: October 17, 2009, by Ben Curry
I'll be at the Mindsharp booth talking about just about anything you want to chat about. I'll be in late Sunday night. I'll be blogging 2010 content starting on Tuesday, but you can follow immediate details as they are released from NDA on Twitter @curryben.
 
Cheers, and see you in Vegas.


Aug 08

I remember Steve Ballmer screaming ‘developers, developers, developers’ back at TechEd (I think it was TechEd, don’t remember the year). That’s what I feel like when I meet with clients talking about Best Practices - SOLUTIONS, SOLUTIONS, SOLUTIONS!! Just say yes to solutions. Maybe I can get Todd Bleeker to do the SOLUTIONS rant and record it! That would be awesome :-)

So if solutions (WSP = SharePoint Solution Package) are so important, why don’t all developers use solutions? Because they either don’t know how, think it’s too difficult, or don’t want to take the time to package them. Before we look deeper into this bad practice of not using solutions, let’s look at WHY it’s a bad idea. First, let me describe a scenario I’ve been through multiple times with clients in the real world:

Real World Scenario from an Admin Perspective:

Server is toast, or worse – farm is toast. So, you followed the best practice of scripting your farm install, so you restore your databases and script the servers back to life, but you get NOTHING when you browse to the site. You check IIS, all of the Web applications and app pools - they look fine. SO, you monkey with the install, still – nothing. Argh… OH! I forgot we had a custom site definition. So, you copy the site definition from a backup (you now restore to a temporary server and go make coffee and play some Tomb Raider while the server restores) and the page renders. What? Now, I have lots of Web part errors. Oh, the Web parts were all manually installed. But, they don't work correctly after you get them reinstalled. Argh… OH! They have dependent features that aren't installed. Ok, you get the features back and no images? What? Are you kidding? Arghhhhh… Oh, I see, the custom site def has images that I now have to copy. So I feel like we're almost there, except I have web.config changes to make, but at least the server is up. Ok, imagine this on a large scale and add lots of site defs and features.

One more thing: what if you needed to remove and/or upgrade items that were already deployed? How are you going to consistently remove them on all servers in a farm? It's like hacking through the briars with a tire iron…

--end scenario--

While it might seem faster to simply deploy 'stuff' directly to the server, long-term supportability is really going to suffer. The problem we have encountered again and again is that people have initial success manually deploying artifacts, but over time, as more items are deployed to a larger number of web applications and more servers, inconsistencies arise. These problems can be very difficult to diagnose because they are inconsistent, occurring only on the servers and web apps that have not been maintained correctly. The process of elimination to locate the offending items is often lengthy, and the solution is the creation of the WSPs that should have been created in the first place.

If WSPs offer so many advantages, the obvious question is: why don't people always use them? The answer is that creating a WSP adds extra steps to the development and deployment process, and in the initial stages of a SharePoint implementation it seems simpler to use other methods and bypass the learning curve that WSPs require. Consider the following situation, which is based upon multiple real-world incidents.

A single web part is developed using strict development best practices. That web part will require the following files and configuration changes:

  1. A .dll to be deployed to the web application BIN directory so that it can implement Code Access Security.
  2. A .webpart XML file that specifies the .dll, namespace, and class for the web part and defines its properties.
  3. A Feature.xml file that is one of two files used to copy the .webpart XML file to the Web Part gallery of the site collection.
  4. An Elements.xml file that is the other of the two files used to copy the .webpart XML file to the Web Part gallery of the site collection.
  5. A Safe Control entry in the web application's web.config file that grants permission for the web part to run.
  6. Code Access Security policies that define what the web part will be allowed to do.

 

When the number of servers and web applications is relatively small, it seems easier to make these changes manually than to use a WSP. Unfortunately, as the number of servers and web applications increases, the process rapidly becomes more complex. Consider our simple web part example: it requires 6 actions to deploy to a single web application on a single server. But if we increase the number of servers and web applications to 3 each, the number of changes increases to 36 (compare that with the WSP deployment sketched after the list below).

  1. A .dll to be deployed to the web application BIN directory so that it can implement Code Access Security. (Deploy to 3 web app BIN directories on 3 servers = 9 changes)
  2. A .webpart XML file that specifies the .dll, namespace, and class for the web part and defines its properties. (Deploy to 12\TEMPLATE\FEATURES on 3 servers = 3 changes)
  3. A Feature.xml file that is one of two files used to copy the .webpart XML file to the Web Part gallery of the site collection. (Deploy to 12\TEMPLATE\FEATURES on 3 servers = 3 changes)
  4. An Elements.xml file that is the other of the two files used to copy the .webpart XML file to the Web Part gallery of the site collection. (Deploy to 12\TEMPLATE\FEATURES on 3 servers = 3 changes)
  5. A Safe Control entry in the web application's web.config file that grants permission for the web part to run. (Change the web.config file for 3 web applications on 3 servers = 9 changes)
  6. Code Access Security policies that define what the web part will be allowed to do. (Change the web.config file for 3 web applications on 3 servers = 9 changes)
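With everything packaged in a WSP, that same deployment collapses to a couple of commands run once - the solution framework pushes the files and web.config changes to every server in the farm for you. A rough sketch (the package name here is made up, and the exact switches depend on how your solution is built):

stsadm.exe -o addsolution -filename ContosoWebPart.wsp
rem deploy to each target web application; -allowcaspolicies lets the CAS policy in the manifest be applied
stsadm.exe -o deploysolution -name ContosoWebPart.wsp -url http://portal.contoso.msft -immediate -allowcaspolicies
rem run the pending deployment timer jobs now instead of waiting
stsadm.exe -o execadmsvcjobs

Repeat the deploysolution line for each web application, and the solution framework handles every server in the farm.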

 

There are 3 more real-world lessons learned that should be mentioned here. These problems are not limited to web parts and related files. Items as simple as images and as complex as site definitions (the blueprints that detail the creation of SharePoint sites) are all affected by improper non-WSP deployment. Site definitions, field types, event receivers, workflows, and features must all be deployed via WSP. In addition, CSS files, ASPX pages, and Master Pages will need to be deployed via WSP if they are to be used farm-wide. As a general rule, if the item in question will affect the entire farm, it will likely need to be deployed via WSP.

WSPs need to be used consistently. A commonly encountered problem is someone deploying a .dll via WSP and then manually replacing it as a bug fix. When a new server is added to the farm and the old WSP is deployed, the servers are now out of sync. Similar problems can arise if it becomes necessary to rebuild a server.
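The fix for that scenario is to push the bug fix through the same pipeline rather than copying the .dll by hand. Roughly (again, the package names are hypothetical):

stsadm.exe -o upgradesolution -name ContosoWebPart.wsp -filename ContosoWebPart.v2.wsp -immediate -allowcaspolicies

That way the updated package is what gets deployed to any server that joins the farm later.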

Using WSPs and features can make deploying dependencies much easier. Consider the AJAX web.config feature that is available for free from CodePlex. The feature automatically makes the required web.config changes to allow AJAX Extensions to be used in SharePoint. The feature can be found here: http://features.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=2502#DownloadId=17819. WSPs can also greatly simplify disaster recovery, creating test and development environments that mirror production, and running multiple geographically disparate SharePoint implementations. This can be enhanced further by creating a master script that adds and deploys all WSPs consistently across the entire farm.
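As a very rough sketch of such a master script (a plain batch file; the path, URL, and switches are placeholders, and in practice most solutions need their own deploysolution switches):

rem deploy-all-wsps.cmd - add every package, then deploy and flush the timer jobs
for %%f in (C:\Deploy\*.wsp) do stsadm.exe -o addsolution -filename "%%f"
for %%f in (C:\Deploy\*.wsp) do stsadm.exe -o deploysolution -name "%%~nxf" -url http://portal.contoso.msft -immediate -allowgacdeployment
stsadm.exe -o execadmsvcjobs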

Thanks to Jim Curry for helping with the softie part of this blog :-) While I’m not a developer, I’m lucky to be surrounded by some top talent. You rock, Jim!

I’m already working on Bad Practice #3 – Implementing SharePoint with poor requirements.

Come see me at the Best Practices Conference, August 24-26 in Washington D.C.

Ben Curry / CISSP / Microsoft MVP / Know More. Do More.

Best Practices® Education, Training, and Conferences

https://bestpracticesconference.com

 



Jul 29
Published: July 29, 2009, by Ben Curry

 

So, I'm going straight to bad practice #2 because of conversations with some peers yesterday here in the UK. We see lots of blogs and articles about SharePoint Governance, and they are all very lengthy and probably applicable to most organizations. But what I've seen is that the average SharePoint administrator is also the Exchange admin, the firewall admin, and sometimes the accountant! The point is - many folks don't have time to go through a lengthy governance process. But we know what kind of trouble they'll get into without it! So, what's the answer? I call it 'Bare Metal Governance'.

These are the bare-bones necessities you need to cover for a successful SharePoint implementation. It isn't pretty or well explained, but it will get you started in the right direction.

Item/List/Site Recovery – Who is responsible? How will you back them up? Does it work?

Versioning – How many? At least one for backup reasons? Who manages this?

Monitoring – You are monitoring your farm, Web apps, app pools, databases, drives, NICs, zones, firewalls, etc – right?

Reporting – How are you doing reporting on things like performance and security?

Developer Customization – How do you control developer customizations and custom code? Solutions? Features? Both? Ad hoc? (I hope not the latter!)

SharePoint Designer Customization – Does everyone have SPD? Is that a good thing?

Windows Server configuration management – Who controls the configuration and change management of the server platforms themselves?

Server farm configuration management – How many farm admins do you have? Do you trust them? Are they trained?

SQL Server  - Are you monitoring uptime and performance? Are you using multiple databases where it makes sense? What types of drives do they live on? Are you mirrored/clustered? How do you test patches? What’s autogrow set to for logs and data?

Themes – Do you control how many / what themes are available in the sites?

Site Quotas – Do you control how large site collections are? This is the only way to control the 2nd stage of the Recycle Bin, right?

Navigation consistency – Do you need a consistent navigation story for both global and current? How will you accomplish this consistency?

Recycle Bin settings – How large is your 1st stage? Who sees it? How large is the 2nd stage? Who manages and restores from the 2nd stage?

Upload size – What's your maximum upload size? Why? Will IIS time out over WANs or sluggish VPNs?

Site and Site Collection Creation – Who creates Site Collections? Sites? Who can delete them? Manage them? Authorize access?

How will your users authenticate? Multiple AuthN sources? How will you accomplish that?

Security - Farm level – Who’s in command? How are you auditing that?

Security - Site Collection Level – Who controls security for site collections? How are you sure?

Authorization Mechanism/training – Do people know how to authorize access within your organization? Are they following the proper procedures, like need-to-know or FOUO?

Search - Farm/SSP Level config and change management – Who controls Search management? Don't get your search management mangled.

Search - Site Collection config and change mgmt. – Who is controlling the end user search experience? Keywords, best bets, Google ads, scopes, etc..

Document Creation/Publish/Mgmt, etc – How do you control findability keywords? Content types? Consistent metadata? Publication? Approval?

Metadata management (taxonomy) – What’s your taxonomy look like?

Content Types – Are your content types truly farm unique? Who defines and manages these?

Information Management Policies – Who controls and audits your IM Policies?

IIS Config mgmt. – Are you watching your IIS configuration management/change management? Are your server admins messing with your IIS configs? Are you backing these up independently?

 

 

Others and I will be talking more about these at the Best Practices conference coming up in Washington D.C. in August.

 

Ben Curry / CISSP / Microsoft MVP / ben.curry@mindsharp.com  / Know More. Do More.

 

Best Practices® Education, Training, and Conferences

https://bestpracticesconference.com



Jul 20
Published: July 20, 2009, by Ben Curry

Best Practices: Back to Basics

I’ve received many emails from people wondering what I’ve been up to and where I’ve been. If you didn’t know, I run the Best Practices® conference company and consult with Summit 7 Systems. Both of these are in addition to teaching with Mindsharp. So, needless to say, I’ve been busy. 2009 has been an interesting year because I haven’t uncovered as many ‘best practices’ as I have ‘bad practices’. I’ll be covering some of these bad practices in a series of blog posts over the next few weeks, but I want to focus this blog on the basic premise of Best Practices® and how the concept increases your odds for success. Essentially, I’d like to lead you back to the basics of doing things the right way.

Note: At the upcoming Best Practices® Conference in Washington D.C., I’ll be presenting the Top 10 Bad Practices and what you can do about it.

I’ve many customers that struggle with implementation and support of best practices because of organizational politics, budget constraints, and culture. While most of the SharePoint administrators and developers I work with want to implement best practices, they face impediments and many times just give up. When this happens, one of two outcomes is often the case:

1.       SharePoint doesn’t work like it should and the stakeholders basically come to the conclusion that ‘SharePoint doesn’t work for us’.

2.       The stakeholders believe 'SharePoint is a bad piece of software'.

In either instance it's difficult or impossible to then get a successful implementation. There will be too much negativity about the software in general, and most likely another tool will replace SharePoint. BUT, will the next tool be any better? Probably not! Now, I've seen another tool work because a new project manager is assigned the task who has more political power, is a better architect, more completely understands the culture and the change a new tool will bring, or all of the above. While that new PM might very well successfully implement a new tool, they could have gotten SharePoint to work just as well! So, what can you do to ensure a successful implementation? Get back to basics!

What is Best Practices® all about?

Best Practices is about doing it the right way. Part of doing it the 'right way' means adapting to the surrounding environment of business, culture, politics, requirements, and security. One of the primary foci of Best Practices is to encourage professionals to think about the best method to accomplish a task, no matter how large or small. While Best Practices is mainly focused on large and complex environments, I honestly believe they are needed by everyone. This is where Best Practices may be very different between verticals depending on the technology, size, scope, culture, and complexity of the problem. Best Practices must be adaptable.

Best Practices should be intellectually simple. The practical application is usually not.

Let me give you a practical application of that. A very common bad practice I see is not using solutions for custom code and farm modifications. The best practice is to use solutions to package everything possible when customizing the server farm. If you've already deployed lots of custom code, then re-packaging your code in solutions could be quite a lot of work. But what if you had to rebuild the server farm over the weekend and the developers who wrote the code were out of pocket? I can tell you from personal experience that you'll restore servers from tape to temporary hardware. You'll then spend hours (maybe days!) finding all of the dependent artifacts (think features, images, web.config, etc.) one by one until you think everything is put back in place. Even then, you can almost be assured something got missed. A Web part, an IIS setting - something you'll get a helpdesk call about on Monday.

I only put this example out there because it's a real-world bad practice driven by budget constraints and prior poor system design. I do get it - it's difficult to go back and re-tool your SharePoint implementation. But it's a lot like getting exercise: "Short term gain, long term pain" OR "Short term pain, long term gain". We want the latter. It's soooo very nice to rebuild a server farm when all of the custom development was packaged as solutions. I like to make an uber-solution that we can deploy after the configDB rebuild. More on this in another blog… back to basics today.

The point is that best practices sound really simple, and folks sometimes glaze over with boredom when we talk about them. But the real challenge is digging into the best practice and finding the best way to implement it. By the way, this is what the Best Practices® Conference is all about. We help you with the difficult practical application.

While it’s true SharePoint won’t create world peace, solve world hunger, or solve the budget deficit, it can increase the overall efficiency of your organization. As a general rule, when folks decide to implement SharePoint without outside help or training, it fails. At best, it becomes a glorified file share that has simply increased the cost of storage and operational support!

So, what can you do about it?

First, realize that SharePoint is a tool and should be used to fix the problems it was intended to fix. Second, don't try to fit a square peg into a round hole. Things like CRM, accounting, portfolio management, and relational data management probably don't belong in SharePoint. When you try to make SharePoint fit every IT need you have, it's a sub-optimal experience for all.

Here are some Best Practices basics you should always have in any IT project:

1.       Get the stakeholders involved

2.       Gather requirements from the business people (the more interviews, the better)

3.       Create a project plan

4.       Get some training!

5.       Engage the services of an architect if you don't have one on staff

6.       Create an IT Governance plan for the project

7.       Prototype solutions

8.       Create a Test and/or Development environment

9.       Execute a test plan

10.   Define Service Level Agreements

I'll spend more time as 2009 goes by talking about each of these in detail…stay tuned.

Ben Curry, CISSP

 



Feb 24
Published: February 24, 2009, by Ben Curry

I fully realize this is a stab in the dark at the actual Top 25 (they are in no particular order), but it is a compilation of questions from customers, students, conferences, blogs, and emails about the SharePoint Server 2007 Best Practices book. Additionally, I am not talking about development topics, because that would be a whole 'nother animal (and I am not a developer). In other words, if you disagree with them actually being the top 25, that's ok :-) Because it is impossible to list every design variable for every SharePoint Server 2007 installation, I'm basically going to explain how to find the answer for your implementation. You will be provided with a foundation to go research each of these design questions for your environment. If you want to know more about these, come see us at the conference! I'm sure you can get your answers there.

#1 - Should I migrate all of my content to SharePoint Server 2007? A common mistake is moving lots of file share content, from tens or hundreds of file shares and systems to tens or hundreds of SharePoint Server 2007 sites, without a plan. If you move disorganized content to SharePoint Server 2007 without a plan, you will simply have disorganized content in SharePoint Server 2007! Except now you have probably tripled your per-bit cost! Part of your content migration plan should be an information architecture design. More importantly, you must educate your users on the correct way to store and retrieve content, or your well-laid plans can quickly erode. Planning and Governance are critical to successful content migration. Otherwise, you will simply have CHAOS! If you can, check out Joel Oleson's SharePoint Governance: From Chaos to Success in 10 Steps at 2009 Fall Dev Connections (he'll be co-presenting with another awesome MVP, Dan Holme). Also, Robert Bogue has some great stuff on Governance (older, but useful, Part 1 and Part 2, and a newer one with a presentation), as does Mark Schneider (Mark also has Taxonomy tips and tricks, including 'When Taxonomies are Evil'). Train your users how to migrate data, tell them what to migrate, and archive the rest. I'll challenge you that less than 50% of your current file share content is actually needed. So go ahead and delete it! Whoa, you say? Don't worry, I wouldn't delete it either! Not me! No way! Unless you have great metadata on your unstructured file share content (and I bet you don't), the only folks who know whether or not you need the content are the Data Owners. We're all afraid to delete someone else's data for fear it will then be needed. File shares aren't dead, btw - I've seen that SMB 2.0 is greatly improved over the last version and will help with DFS over the WAN. This reduces the driving factor in some organizations to move everything into SharePoint for sharing. Basically - put 'stuff' into SharePoint when you need SharePoint functionality - like versioning, workflows, policies, templates, publishing, etc. If you aren't going to collaborate on the content, you might consider leaving it on a file share.

 

#2 - How large can my content databases be? That is a very common question, and it is mostly related to your service level agreements (SLAs). An SLA defines, among other things, the maximum time to return your application to service. If you do not have an SLA, you should ask the stakeholders how long your system can be down in the event of failure. You must take the maximum amount of time you can be down and calculate how long it will take you to restore a database in the event of a problem. For example, if your SLA defined four hours as the maximum down time, you would need content databases no larger than about 150GB with the average tape system on the market today. You should test your backup and restore speeds to a SQL Server instance to benchmark performance for your system. Once you have calculated the maximum size your content databases can grow to, divide that size by the site quotas used in the Web application associated with those content databases. Here is an example of calculating content database size:

 

(Site Quota) x (Number of Sites per Database) x (1 + 2nd Stage Recycle Bin %) = Maximum Database Size
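To put illustrative numbers on that (yours will differ): a 500 MB site quota x 200 sites per database x 1.5 (the default 50% second-stage Recycle Bin) works out to roughly 150 GB, which is about the limit the four-hour restore window above allows.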

 

You must estimate your backup throughput, populate content databases with information and test in your environment. Nobody can tell you exactly what your numbers should be. But I can assure you that the default settings of 9,000 sites before a warning and 15,000 sites maximum are unlikely to be accurate in your environment. If you thoughtfully set these, you will assuredly have multiple content databases per Web application.
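Once you've settled on your numbers, you can set the warning and maximum site counts on each content database in Central Administration, or from the command line - something along these lines (the database name and counts are placeholders):

stsadm.exe -o setcontentdatabase -url http://portal.contoso.msft -databasename WSS_Content_Portal01 -warningsitecount 85 -maxsites 100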

 

Another size issue not to overlook is database locking, which can cause blocking. Microsoft has recommended that databases not be larger than 100GB, but it seems they are simply hedging their bets in regards to database blocking. Essentially, that limits the I/O of SQL Server and reduces the chance blocking will occur. I have recently confirmed that large site collections are a bad idea and can cause database blocking. Imagine this - you have 200 sub-sites in a Site Collection. Because the entire Site Collection is stored in a single table, a large transaction that must lock the table now blocks all 200 sub-sites! So, use your head when architecting databases/site collections and don't smoke crack. I've even said you can have monster content databases in the past - I was wrong. The only way I would now architect large content databases would be for fairly static data that did not have a large collaborative user population. Joel blogged on blocking/locking a bit and so did Mike Watson. If you want to know more - get them to write more about it. They know tons more about the issue than I do. Beware: you won't get event or trace errors when blocking occurs, because nothing is wrong. If you are getting errors, you may simply have an I/O overload on your SQL or WFE server.

 

#3 - How many Web applications do I need? This will be very different for every installation, but there are some general guidelines to follow. A good rule of thumb is that fewer are better. Keep it simple and create new Web applications only when necessary. In the beginning, most organizations will have at least the following:

 

Portal - A Web application is usually created for your intranet, regardless of whether it is actually called a portal. It is a centralized, governed Web application where content is aggregated. Unless you have specific requirements to do otherwise, this is also a good place for your collaborative site collections.

 

Shared Services Provider Administration Web Application - While it is not required that you have another Shared Services Web application to host Shared Services Administration, it is useful for the purposes of backup and restore and application isolation. This is not a Shared Services Provider! This is simply a Web application, with a site collection contained therein to manage your Shared Services Provider.

 

My Sites Web Application - It is also not required that you create a dedicated Web application to host My Sites. But doing so eases administration of My Sites in that you can leverage Web application permission levels, policies, and authentication for the hosting Web application. If you choose to host My Sites in another Web application, be sure to install the My Site Host template in the same Web application. This specialized site collection is used for default settings and for the crawler to index profile settings for people search functionality. I disagree with a couple of SharePoint folks I highly respect, in that they don't like a dedicated Web application. Here is why I think you should dedicate a My Site Web application:

 

·       You can easily leverage Web application policies to define security levels for all My Sites in a given Web application.

·       You can change the available permission levels to all My Sites from Central Administration.

·       You can more easily define content database design.

·       Backup and Restore is simplified because portal and team sites are not in the same Web application as My Sites.

·       You can create zones for the My Site Web application to allow modified access externally, via a different URL.

·       Your users can browse to the root (like http://my) and automatically be redirected to their respective My Site.

Central Administration - The best practice is to always have Central Administration in its own Web application. This is the default setting. You should not use Central Administration to host any other services.

 

Unfortunately, additional Web applications are often created due to politics within an organization. While a managed path is usually sufficient to meet a requirement, customers and executives sometimes drive designs that are needlessly complex. For example, you might have a Human Resources executive who demands a Web application named http://HR, when a sub-site or embedded managed path site collection in the corporate portal named http://portal/HR would work just as well. Another Web application usually means more resources, additional content databases, and additional IIS Server configuration. But even after explaining the benefits of not creating another Web application, you may still be forced to create the http://HR Web application. That's OK; just try to keep them to a minimum.

 

#4 - How do I enable intranet/extranet access to content? A major question from many is, "How can I securely access my content from either the intranet or Internet?" This is such an important topic that an entire chapter, Chapter 20 "Intranet, Extranet, and Internet Scenarios," was dedicated in the Best Practices book. But I'll at least cover the general concepts in this blog. First of all, you can extend an existing Web application, http://portal.contoso.msft, for example, to use another IIS Web application and additional URL http://portal-ext.contoso.msft. Using Web application policies and zones, you can restrict access based on the URL. While this isn't a bulletproof security model, it is useful for many organizations. There are other options as well, such as legacy virtual private network (VPN) access and, more recently, SSL VPN access. Even the most skeptical SharePoint security person (yes, I linked to Spencer Harbar - check out his blogs for SharePoint security info) would consider VPNs reasonable security. I like Layer7 VPNs like IAG and others. Very nice, no thick client, kerberos authN, etc.

 

#5 - What level of content type planning must I do? Content types are a very important part of SharePoint Server 2007. In fact, every Web page, document, task item, meeting request—virtually everything stored in the database—is a content type. You can use the default content types in the beginning and methodically expand your usage, but depending on your organization's policies, judicious use of content types from the beginning may be needed. An example of this would be requiring metadata collection as part of a content type. You may need to know if an item is confidential, secure, belongs to a division, or has a project identification code. You can always go back and tag items later with metadata values, but defining them in the beginning can make your content management easier down the road. My experience has been that you are better off using the defaults than setting them up incorrectly. One of the challenges of SharePoint document management is centralizing the control of metadata. Metadata is KING when architecting a SharePoint document management and/or Search and Findability solution. Note: I'll be presenting a pre-conference session at Dev Connections on SharePoint Server Document Management Best Practices, November 10, if you are looking for an in-depth discussion. If you want to control metadata throughout your enterprise, I would consider a globally stapled default content type. This content type (or multiple if needed) can be automatically added to every library created. Here's a sample project you can download that does exactly this - Mark Ferraz's global content type stapler. I think this is way better than using custom list definitions. Basically, it is a user control that runs once when you create the library, adds the content type, then hides itself. Very slick.

 

#6 - Do I need an information architecture plan? The short answer is YES. Without some planning of the Web application, managed paths, and site collection structure, you could easily end up with a mess that cannot easily be fixed. Information architecture is a lengthy topic, and is covered in the Best Practices book's Chapter 7, "Developing an Information Architecture." For the sake of designing in the context of this blog, you simply need to gather input from the stakeholders on what your Web application, managed path, and top-level site structure will be. Try to help your stakeholders understand the importance of getting it right from the very beginning. A mistake with your information architecture in the beginning can make corrections later very difficult. Ok, it's almost impossible! DON'T OVERLOOK YOUR INFORMATION ARCHITECTURE! A simple Google (or live.com :-)) query on SharePoint Information Architecture will yield a ton of results.

 

#7 - Do I need records management? If your stakeholders require records management for legal or regulatory compliance, then you should consider implementing a records center. Otherwise, you should attempt to manage your document life cycle in-place. Most organizations will be fine using information management (IM) policies via content types and lists. IM policies include auditing, labeling, expiration, programmatic workflows, time-based approvals, and barcodes. Creating a records center usually complicates your administration more than it resolves issues. If you do require a records center for compliance, plan for the additional Web application and Shared Services Provider needed for proper isolation. Why another SSP? Because you probably aren't supposed to have your official records indexed along with the rest of your content, and because you can only have one Index per SSP, you'll need another SSP for the sole purpose of hosting a Records Center index. This will allow you to place holds on large numbers of records via Search.

 

#8 - Do I need search? This should be an obvious Yes in this day and age, but many folks overcomplicate this in the beginning. You needn't have a robust search topology and plan before implementing SharePoint Server 2007. Search will benefit you greatly, but don't let the fear of planning search stymie your plans for SharePoint Server 2007. In the beginning, just use the native search functionality, and expand as your knowledge and requirements increase. One word of caution — because your users have been trained by Internet search engines to find what they need via search, you do need a reliable search center in the very beginning. You want your users to trust SharePoint Server 2007 search early, because otherwise it is very hard to gain back their trust. Trust me, if you are new to SharePoint Server just get search working with SharePoint content first.

 

#9 - Should I configure version pruning policies? You should decide what the official policy is on version pruning. If you leave it completely up to your users, they could turn on versioning with no limits. This action leaves you in the same state as SharePoint Portal Server 2003 and means there is no limit to the number of versions in document libraries. This is generally bad practice because it can dramatically increase your disk space usage. But before you freak out too much about versioning, your users are probably already versioning on the file shares. More often than not, we see important documents named 20 different things on the file share, each essentially a different version. You should decide how many major versions to maintain, how many major versions you will keep minor versions for, and what the security will be on each. (You can't limit the number of Minor versions - read the UI carefully - you can only limit how many major versions you will keep ALL minor versions for!) These decisions will vary greatly depending on your requirements, but at least one major version is recommended for content recovery due to user error and data corruption.

 

#10 - Will you allow users to modify sites with SharePoint Designer? With proper training, your users can modify sites with SharePoint Designer 2007 and produce very elegant, customized SharePoint Server 2007 sites. Without proper training, your users can break sites and pages, customize pages that should not be customized, and affect overall server performance. A best practice is to provide the SharePoint Designer tool only after users have received the proper training. Check out Heather Solomon's blog for SharePoint Designer tips and tricks.

 

#12 - What content will you crawl? From a technical perspective, you should define what content sources you will crawl. You should always crawl your local SharePoint Server 2007 content, including My Sites. But you may need to crawl additional sources from the very beginning, such as file shares and Web servers. Be sure to apply search best practices when doing so and plan for crawler authentication. Also, be careful when crawling file shares because you may expose information that was previously secured through obscurity. You need to include your business team when creating new content sources. Help them understand that SharePoint Server search is security trimmed, and it works very well :-) For example, let's say an administrative assistant was trying to share salary information with executives, but could not figure out the correct permissions. We all know what happens next, right? Of course! They could just say 'Everyone, Full Control'! So then the crawler indexes that information, and 'knows' that everyone has access to that search result. Common users could now see all of the salary information. The best practice here is to run an ACL audit before crawling any content outside of SharePoint.

 

#13 - How many Shared Services Providers will you have? You should plan for the number of Shared Services Providers you will have. Most installations should have only one. You can safely assume the best practice is a single Shared Services Provider. If you do not know why you would create more than one—don't. If you still want to create more than one, read the SSP section on TechNet, or the SSP chapter in the Best Practices book.

 

#14 - Who will create new site collections? This is really part of your governance strategy, but suffice it to say that SharePoint Server 2007 was designed to allow users to manage their own destiny in regard to workspaces. Your goal should be to train users and allow them to create their own site collections. If you choose to do otherwise, you should seriously consider training a set of site collection administrators to perform the creation and management. Otherwise, the IT department will end up with more work than they can do, and delay site collection creation for users. I'm not saying all users should have the ability to create Site Collections, but I would train folks outside of your SharePoint administration staff to do this. Trust me, you'll be glad you did.

 

#15 - Will you enable incoming e-mail for lists? Incoming e-mail is a very cool feature that allows you to define an e-mail address for a list, thus enabling inbound e-mail to that list, e.g. Account.Doc.Library@contoso.msft. But enabling incoming e-mail for lists and libraries isn't as simple as selecting the option in Central Administration and the target list. You must install an SMTP server, configure DNS, and allow the proper security in your network and e-mail server. Additionally, users get to create these Contacts (that's all they are) in Active Directory automatically if you enable the Directory Management Service, and they can name them anything they want. So do yourself a favor and create a dedicated OU for these contacts. But if you don't enable the Directory Management Service, you will have to manually create each entry for every mail-enabled list. You should work with the respective teams and explain the functionality and requirements of incoming e-mail. Incoming e-mail is a very cool feature for sending meeting requests to calendars, discussion lists, workflows, and more. But before you implement it in your production server farm, be sure to test it first! It would be sub-optimal if a document library or list started receiving spam :-) There are several security settings in Central Administration to limit the e-mail source. Additionally, I would limit which servers can send mail via Windows Server and your e-mail server. One last thing: custom lists can't be mail-enabled by default. You'll have to get your softie to code it up for you.

 

#16 - Will you mail-enable SharePoint groups? Mail-enabling SharePoint Server 2007 groups allows SharePoint to create and synchronize Active Directory distribution lists with your SharePoint groups. Have you ever wanted to mail all contributors for a site collection, but had to enter each person individually? Well, mail-enabling SP groups will create a DL and keep it sync'd as you add users to the SP group. Be forewarned that while there is an approval mechanism in Central Administration, users can name these DLs anything they want. So, much like incoming e-mail, you should have these created in a dedicated OU in Active Directory.

 

#17 - Do you have workflows that should be created organization-wide? If you have workflows that are needed in all or many sites, consider creating the workflows in Visual Studio and deploying them as features. A best practice is to create workflows as needed, and only deploy globally after verifying their need and functionality in a prototyped site.

 

#18 - Who will manage your code access security? Code access security (CAS) is widely regarded as a developer responsibility and not an administrator responsibility. But the best practice has been proven to be the opposite. Developers (of course, none of the SharePoint MVPs before you start to flame me ;-) ) often create code in a "full control" environment to ease application development. But writing code with no security boundaries can be a vulnerability when deployed. You need to decide who will manage code access security and how it will be audited. I say it's the Operations staff that manages CAS. Check out Brett Lonsdale's CAS blog here.

 

#19 - What logging and auditing policies do you need? As outlined in the Microsoft SharePoint Products and Technologies Administrator's Pocket Consultant, defining and setting logging and auditing policies is an important exercise when implementing SharePoint Server 2007. If you don't set your policies, the defaults are rarely enough to help when a problem arises, yet they still impact server performance. Don't simply set your logging levels to Verbose; you should make informed decisions about logging and auditing settings. Many SharePoint Server 2007 administrators set logging levels only to report errors, and increase the level of auditing when troubleshooting an error. This has proven to be a good starting point. If you want an easy-to-use script to set your logging, check out my free logging Excel sheet generator in Mindsharp's premium content area. It's free, you just have to register. This script includes the 60-something hidden logging values as well (like all the cool Search tracing and event logging counters).

 

#20 - How will you monitor your solution? You should decide what to monitor, and to what level you will monitor services in your SharePoint Server 2007 server farm. Too much system monitoring, and you could miss important facts because of too much information. Too little monitoring, or using the wrong performance counters, will have the same result. At the 2009 Best Practices SharePoint Conference, we'll have presentations from Mike Watson on capacity planning and monitoring. It won't get any better than that!

 

#21 - How will you back up and restore your content? The best time to plan for content recovery is before you implement SharePoint Server 2007. Much of your content recovery plan depends on how your SharePoint Server 2007 server farm is implemented. A common bad practice is trying to force stringent recovery objectives onto a system that was poorly installed. Doing so is a lot like trying to get a Yugo to perform like a Ferrari! If you installed via the default options, use native backup tools, and ignore SQL Server transaction logs, you are most likely assuming a 24-hour data loss in the event of SQL Server failure. If you aren't moving your backup media off-site, then you are assuming a total loss of data. Can your company sustain a total loss of data? 24 hours? These are some of the questions you need to answer before implementing SharePoint Server 2007, or at least before moving business-critical content into SharePoint Server 2007. First, you must define where the valuable content resides, or will reside. If SharePoint Server 2007 is simply a front-end dashboard for back-end business data, then you will be more concerned with getting SharePoint Server 2007 back online after a failure, and less concerned with the loss of SharePoint Server 2007 content. In this example, your primary recovery target would be the back-end business data. Likewise, you must design for accessing your content. If you require immediate access to your data, then solid backups to tape may not be sufficient. Instead, you may need to plan for disk-to-disk backups, or create a mirrored instance of your farm altogether. Unless you have a very simple installation, your data protection and recovery plan will require some preparation. Often, it isn't a planning process that you can do alone. It will require discussions with the data owners and stakeholders to understand the criticality of the data and what the expected availability is. The two key concepts to keep in mind are:

 

Recovery Time Objective The recovery time objective (RTO) defines how long your system can be down after a disruption before it must be back online. The disruption could be due to anything from a SQL Server outage to a WFE Server failure. You don't have to have the same RTO all of the time. For example, a bank might have a very short RTO from Monday through Friday, 9 A.M. until 5 P.M., but a longer RTO for all other times. The RTO should include data recovery at the server, farm, database, site, list, and item levels.

 

Recovery Point Objective The recovery point objective (RPO) defines your data loss threshold, measured in time. If you run daily backups only and ignore the SQL Server transaction logs, then your RPO is 23 hours, 59 minutes, and 59 seconds. Any data written to SharePoint Server 2007 after you ran the backup cannot be restored via native tools until after the next backup. Many organizations assume this risk without fully understanding the impact of losing 24 hours worth of data. Check out this blog for managing multiple SLAs within a single Web application.

 

There are many 3rd-party backup products, like Quest, AvePoint, and Commvault, but carefully research and test these (they all work, but they each have their advantages - and there are more, too; I'm not trying to leave anyone out intentionally). No matter what solution you choose (native SQL backups for content databases are also a good bet), you should test, test, and then test some more. I've never seen a disaster recovery plan work the first time. Don't let the first time be when you really need it! 'Pretend' to experience a disaster (not during production hours, of course!) and see if you can restore everything. Some of the trickier components to restore are the Shared Services, especially Search.
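For that 'pretend' disaster, the native catastrophic backup and restore is a reasonable place to start - roughly along these lines, with the UNC path being whatever share you use, and the restore pointed at a test farm rather than production:

stsadm.exe -o backup -directory \\backupserver\spbackups -backupmethod full
rem restore the whole farm in the test environment, overwriting what's there
stsadm.exe -o restore -directory \\backupserver\spbackups -restoremethod overwrite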

 

#22 - Should we migrate My Documents to My Sites? Many of you want to replace My Documents with SharePoint Server 2007 personal portals, also called My Sites. This isn't altogether a bad idea, but you need to carefully plan what content will be migrated. Remember that SharePoint Server 2007 has limitations on file upload size and file types, and drastically changing these can have negative repercussions. But My Sites are often a good starting place for an enterprise SharePoint Server 2007 deployment because of the immediate value stakeholders can see in work efficiency and collaboration. Large organizations have seen the value in Exchange Server installations, and My Sites are a natural extension to that in the minds of many executives. If your users currently store music, video, ISO, and other large file types, you should consider some type of file storage other than My Sites.

 

#23 - What should my farm topology be? Many administrators are concerned with the farm topology at the very beginning of their design. The truth is, your farm topology is almost always the last design consideration. You should start with the end-user's experience with the product (it is, after all, an end-user product) and design toward the farm topology. You should first decide your information architecture, Web application design, search requirements, security, governance stance, and user requirements. If you must buy hardware immediately, plan for a medium server farm topology. A medium farm topology consists of two WFE servers, one application server, and one SQL Server. Alternatively, you can continue developing on your prototype system and scale outward as needed. Either way, your farm topology can be changed with relative ease later. If you are really concerned with how you'll build your farm, start with a medium farm and edit these scripts :-)

 

#24 - Will I create custom Site Definitions? I think probably not, for most. Eric Shupps says it well here. But instead of re-hashing what a bunch of SharePoint MVPs have already written and argued in public, check out this blog. It just about says it all.

 

#25 - What are your security controls? Early in your design, you should decide what security authentication and authorization you will use. You essentially have two choices for authentication—Windows Authentication and Forms Based Authentication (FBA)—although other pluggable choices exist. Windows Authentication has the deepest functionality in SharePoint Server 2007, but support for FBA is rapidly gaining. If you can use Windows Authentication with SharePoint Server 2007, it is easier to install and easier to maintain. There are two types of Windows Authentication—NTLM and Kerberos. One isn't necessarily better than the other from a design perspective, but Kerberos is generally a better choice from a performance point of view, while NTLM is easier to configure and install. There are instances, however, where FBA is preferable. An example is a partner extranet where you want to authenticate users against a line-of-business system. When using FBA, carefully test your SharePoint Server 2007 functionality before putting it into production.
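Part of what makes Kerberos the harder of the two to set up is the service principal name (SPN) registration for the app pool account - roughly like this, where the host name and account are placeholders for your own:

setspn -A HTTP/portal.contoso.msft CONTOSO\svc-portal-apppool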

 

Ben Curry, CISSP, SharePoint Server MVP

Mindsharp

http://mindsharpblogs.com/ben

 


