Tag Archives: SSL

Securing Tomcat

So we have this wonderful application, “Crystal Reports Server 2008 V1”. Property of SAP, formerly property of “Business Objects”, formerly property of Seagate Software, formerly… never mind.

CR Server claims to run on IIS, so we had hoped to deploy the new version without needing to add the usual pile of unmaintainable WAMP cruft. Unfortunately, it turns out there are bits of it that out-and-out require a J2EE web server, such as Tomcat. Rather than manage both IIS and Tomcat, we are just going to run with the cat. Good decision? Meh…

Here is the quick-and-dirty on some simple security measures we took to protect the app:

  1. Remove the sample and documentation webapps from the Tomcat server “webapps” directory.  Delete jsp-examples, servlet-examples, tomcat-docs.  Consider also removing ROOT, balancer, webdav, if not needed by your application.
  2. Use the Java “keytool.exe” utility to create a Java keystore.  Create a web server key pair in the keystore, then generate a CSR from the pair.  Have the CSR signed, and import the signed cert back into the same keystore alias.  The best doc on CSR generation and import is probably this one:


    But note that you may need to specify a “keyAlias” parameter in your Tomcat server.xml if you did not use the alias “tomcat” when generating your key.  This tidbit is not in the documentation.

    Also note that you likely will need to add the parameter “-keysize 2048” to your “genkey” command if you want to ensure a key length of at least 2048 bits (most modern CAs will not issue a cert shorter than this). Additionally, you can add the “-storepass” parameter toward the start of your command string to help with automation of future commands within the same shell session.
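    Put together, the keytool sequence looks roughly like the following. This is a sketch, not gospel: the keystore path, passwords, host names, and distinguished name are placeholders you will need to adapt.

```
rem Generate a 2048-bit key pair in a new keystore (alias "tomcat")
keytool -genkey -alias tomcat -keyalg RSA -keysize 2048 ^
  -storepass changeit -keystore C:\Tomcat\conf\keystore.jks ^
  -dname "CN=crserver.example.edu, O=Example University, C=US"

rem Generate a CSR from the key pair
keytool -certreq -alias tomcat -file crserver.csr ^
  -storepass changeit -keystore C:\Tomcat\conf\keystore.jks

rem After the CA signs the CSR, import the cert back into the SAME alias
keytool -import -trustcacerts -alias tomcat -file crserver.cer ^
  -storepass changeit -keystore C:\Tomcat\conf\keystore.jks
```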

  3. Using the same doc as above, update the connector stanza in the Tomcat server.xml to use TLS.  In the stanza, include an additional “ciphers” parameter:


    This will prevent the use of weak cryptographic ciphers.
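    For reference, the full connector stanza ends up looking something like this (WordPress permitting). The keystore path, password, and cipher list here are illustrative, not canonical; the point is the “ciphers” attribute, which restricts the server to the suites you explicitly list:

```xml
<Connector port="8443" maxThreads="150" scheme="https" secure="true"
           SSLEnabled="true" sslProtocol="TLS"
           keystoreFile="conf/keystore.jks" keystorePass="changeit"
           keyAlias="tomcat"
           ciphers="TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA" />
```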

  4. To the Tomcat web.xml config file, add a “security-constraint” stanza, as documented here.  (Can’t post the code directly, because stupid WordPress clobbers XML.)  This will enforce the use of SSL for all web apps running in Tomcat.
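    The stanza in question goes at the end of conf/web.xml and takes roughly this shape (the web-resource-name is arbitrary; the CONFIDENTIAL transport guarantee is what forces SSL):

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Entire Application</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
```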
  5. If you want a redirect from standard HTTP port 80 to the secure port, add a second connector stanza to “server.xml”, with the following parameter:
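    The second connector is just a plain HTTP listener whose “redirectPort” points at the SSL connector, along these lines (assuming your SSL connector listens on 8443):

```xml
<Connector port="80" protocol="HTTP/1.1" redirectPort="8443" />
```

    With the security-constraint from step 4 in place, requests arriving on port 80 get bounced to the secure port automatically.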


WSUS and Secunia CSI – Deployment Hurdles

After many years of saying that we should do it, we have purchased a third-party patch management solution.  The good folks at Secunia cut us a great deal, and we are now in the process of deploying Secunia CSI, which will be integrated with our existing WSUS (Windows Server Update Services) instance.  Here are some notes on the installation process:

WSUS SSL Configuration:

Secunia CSI uses the “System Center Update Publisher” (SCUP) API to publish third-party update packages into WSUS.  SCUP requires that SSL be implemented in WSUS.  Unfortunately, we had not done that yet, so there was some work to be done before we could even connect the CSI console to our WSUS server.

  1. Group Policy for Windows Update needed to be updated to point clients to the HTTPS version of our WSUS host.
  2. The WSUS host had to be reconfigured to require SSL for the “ApiRemoting30”, “ClientWebService”, “DssAuthWebService”, “ServerSyncWebService”, and “SimpleAuthWebService” IIS virtual directories.  Also, WSUSUtil needs to be run to signal the update service to use SSL, as documented here:
  3. After making these changes, we found that the WSUS management console and Secunia WSUS integration utilities were failing to connect to WSUS (fortunately, Windows Update clients were not having this problem).  The management utilities were reporting “access denied” messages, which apparently were related to authentication failures.  The fix for this was not obvious, nor did we find it documented anywhere.  Since our WSUS server is using an externally-referenceable “Internet” DNS name, we found that we also needed to set an SPN for the service that was running WSUS (in this case, the Network Service).  The command follows:
    setspn.exe -A http/[wsusDnsName] [wsusHostName]
    (where wsusDnsName is the DNS name on the WSUS server SSL certificate, and wsusHostName is the short computer name of the WSUS server Active Directory account).  You can verify that the SPN was added successfully using “setspn -L [wsusHostName]”.
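    Taken together, the SSL-enforcement and SPN commands look roughly like this. The host and DNS names are placeholders; substitute your own:

```
rem Tell the update service to use SSL (run from the WSUS Tools directory)
"C:\Program Files\Update Services\Tools\wsusutil.exe" configuressl wsus.example.edu

rem Register the SPN for the Network Service account of the WSUS host
setspn.exe -A http/wsus.example.edu WSUSHOST

rem Verify the SPN registration
setspn.exe -L WSUSHOST
```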

Secunia CSI Signing Certificate Configuration:

Microsoft’s SCUP API requires that a code signing certificate be present in WSUS for the signing of packages imported into WSUS.  The Secunia CSI console will create a self-signed certificate for this purpose, but the cert will have a non-configurable 512-bit key length, and this cert will have to be pushed to the “Trusted Publishers” store of every WSUS client.  I prefer to use our AD CA servers to generate a Code Signing cert.  This scenario is supported by Secunia, but it is not well documented.  Here is what we needed to do… it was not fun:

  1. On our CA server, create a new Certificate Template by cloning the “Computer” template, and making the following selections:
    1. Select “Server 2003” as the compatibility level for the template.  “Server 2008” templates apparently are not compatible with the SCUP API (or so I have read, and I see no reason to take chances).
    2. Set the minimum key size to 2048 bits or larger.
    3. Set “Subject Name” to “Supply in the request”.  Correspondingly, you should set “Issuance Requirements” to “CA certificate manager approval”.  Under “Extensions”, select “Application Policies”, and add “Code Signing”.  Remove all other policies.
    4. Under “Security”, add the computer account of the WSUS server, with Read and Enroll rights.
    5. Configure the CA to issue certs from this new template.
  2. Use the Certificates MMC on the WSUS server to request a code signing cert for the local computer account.  Failing this, you can assign rights to the template to a user account, then use the CA web interface to request a certificate.  You then can transfer the cert to the local computer account cert store.
  3. The CSI certificate import tool is pretty fussy about the certificate format, and the requirements of the tool are not documented.  We found the following requirements which, once met, allowed for successful signing of packages:
    1. The certificate needs to be exported to a “pfx” file (PKCS #12 format) before import via CSI
    2. The PFX file must contain only the signing cert, and no other certificates in the signing cert chain.
    3. The PFX file must not be passphrase protected.

    So what are we to do? The certificate management GUI on Windows does not allow the export of PFX files with no passphrase! The solution is found in “CertUtil.exe”:

    1. Use “certutil -store” to list the certs in the system account personal certificates store. Identify the serial number of the code signing certificate.
    2. Use “certutil -exportpfx My [CertSerialNumber] [exportFileName] NoChain,NoRoot”
  4. Now import the PFX file that you just created using the CSI console.  It should work, unless I have missed something else important.
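    As a concrete sketch of the certutil steps (the serial number and file name below are placeholders, not real values):

```
rem List certs in the local computer "Personal" store; note the "Serial Number:" line
certutil -store My

rem Export just the signing cert, with private key, excluding the rest of the chain;
rem press Enter at the password prompts to leave the PFX unprotected
certutil -exportpfx My 1a2b3c4d5e6f7890 SecuniaSigning.pfx NoChain,NoRoot
```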

CSI Console Configuration on Server 2008:

Server 2008 / 2008 R2 security settings will cause some problems for the CSI console, unless you make the following configuration changes, as documented in the CSI FAQ, here:
and here:

To summarize:

  1. Add https://csi.secunia.com to your “Trusted Sites” Internet security zone (in the Internet Settings control panel).
  2. In Internet Options → Advanced → scroll to Security → uncheck “Do not save encrypted pages to disk”.

F5 Load Balancing – Performance (Layer 4) mode and Windows Server 2008

NOTE: This article has been updated with a correction to the NetSH commands. Previously I documented that “forwarding” should be enabled on the interfaces, but “weak host receive” and “weak host send” are more accurate, as documented here:

Recently we had a problem with a web application configured for SSL-offload on our Load Balancers.  Our F5 Guru (Ben Coddington) recommended that we switch to a “Layer 4 forwarding” configuration.  In this mode, the F5 will forward TCP packets from the client directly to the web server without altering packet content, which is just what we needed.

Making this work on Server 2008 took a bit of extra leg work, though.  Here are the bones of it:

  • On the F5, create a new Virtual Server using the Type category “Performance (Layer 4)”.  Make sure that address translation and port translation are disabled.
  • Create a new F5 Pool that uses a simple port 443/ssl health monitor.  You could use any of a number of load balancing methods, but I chose “Round Robin” because it is in keeping with the “simpler is better” school of thought.
  • On the Server 2008 system, add a “loopback adapter” in the Device Manager.  (At the root of the MMC console, right-click the computer and select “Add legacy device”.  It will be of type “network adapter”, from manufacturer “Microsoft”, and have a name containing “loopback adapter”.)
  • Assign the load-balanced IP to the loopback adapter with netmask “”.
  • Here is the trick… you must now allow “weak host receive” on all network interfaces involved with load balancing on the Server 2008 system, and “weak host send” on the loopback interface. If this step is skipped, the Windows server will drop all packets destined for the load balancer address:
    • netsh
      interface ipv4
      set interface "Loopback Connection" weakhostreceive=enabled
      set interface "Public Network" weakhostreceive=enabled
      set interface "Loopback Connection" weakhostsend=enabled
  • Make sure you have a valid SSL certificate configured on all RDGateway systems in your farm.

That’s about it… The F5 will forward all packets sent to the load balanced IP to the next pool member in the rotation (barring persistence).  The Server 2008 host will receive the packet, and forward it to the loopback adapter (following TCP/IP routing logic).  The Server 2008 host will reply directly to the client.  Amazingly, it all seems to work.

HTTP to HTTPS redirect using Ionic URL Rewriter

So, you have a site that needs to run over SSL only (shouldn’t they all?)? You don’t trust your clients to type that ever-important “s” after “http” (and why would they?)? You think they will get scared off by those “Secure connection required” error pages (they will!)? You are not running IIS7 (who is?)? Not using ASP.NET?

In the past we accomplished this using a client-side redirect, by creating a custom 403.4 error page with a JavaScript redirect. This worked well, but what if your client systems won’t support JavaScript (e.g. a WebDAV connection)?

CodePlex to the rescue! The venerable “Ionic URL Rewrite” ISAPI filter has been updated, and published on CodePlex:
Thanks, Cheeso!

IIRF now supports the ability to return URL redirects, in addition to simple rewrites.  To use IIRF to redirect a non-SSL URL to a secure version, follow the installation instructions included with IIRF.  Then:

  • Stop your production IIS site from listening on port 80 and enforce SSL usage.
  • Make sure that the production site is not using host headers that would override your port settings.
  • Set up a secondary IIS site which listens on port 80 only.  Add the IIRF ISAPI filter to this site.

Here are some sample entries you could use in the IsapiRewrite4.ini configuration file to accomplish the redirect.  Note that [R] instead of [R=301] also works, but that performs a 302 “Temporary” redirect.  Conceptually I prefer a 301 (not that it matters, because search crawlers are not hitting our Intranet sites):
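Something along these lines should do the job (the host name is a stand-in for your own site):

```
# IsapiRewrite4.ini -- bounce every request on this port-80 site to SSL
# [R=301] issues a permanent redirect; plain [R] gives a 302
RewriteRule ^/(.*)$ https://sharepoint.example.edu/$1 [R=301]
```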

Continue reading HTTP to HTTPS redirect using Ionic URL Rewriter

Sharepoint – farm build procedure

After a semi-disaster with SharePoint earlier this week, I have been forced into the view that I really should have our SharePoint infrastructure hosted on more than one web server.  To that end, I am planning the deployment of a new, 2+ node Windows SharePoint Services farm.

Initial architecture will be something like this:

  • Host: SharePoint2
    • Roles:  Web front end, Search Server query and crawl, ECTS ADAM Instance
  • Host: SharePoint3
    • Roles:  Web front end, Search Server query and index, ECTS ADAM Instance
  • Hosts: WinDB1 and WinDB2
    • Roles:  Back-end SQL Database failover cluster
  • F5 Big-IP Local Traffic Manager (hardware load balancer)

Once initial rollout is complete, we likely will want to add:

  • Hosts: WinDB1 and WinDB2
    • Reconfigured in a SQL mirrored configuration

Here is an outline of the SharePoint2/3 build procedure:

  1. Install Server 2008 x64 Standard OS
    1. Activate Roles:  IIS (with ASP.NET support), AD Lightweight Directory Services (AKA AD LDS, AKA ADAM).
    2. Activate Features:  .Net Framework 3.0, PowerShell, SMTP Server
    3. Activate Feature “Desktop Experience” if you want access to “cleanmgr.exe” (Disk Cleanup wizard) and other desktop niceties.
    4. To save disk space, you might want to delete hiberfil.sys (the hibernation file) by running “powercfg.exe /hibernate off”.
      1. Relocate the page file?
      2. Do something about WinSXS directory bloat (likely impossible without a service pack)
      3. Get IIS log files under control!
  2. Configure SMTP Server using IIS 6.0 Manager.  (FTP and SMTP Services cannot be managed with the IIS 7 Manager!)
    1. Configure BadMail and Drop Box directories so that they are not located on the System volume
    2. Configure to listen only on primary server IP
    3. Configure firewall to accept SMTP connections only from our In-Mail Gateways (using an external firewall in our case, but this could be done with SMTP service settings and/or the Server 2008 firewall as well). <- DON’T FORGET: SMTP is allowed to the hosts, but we need to go over the config with our DNS/mail admins!
  3. Install Search Server Express x64 bits:
    1. Perform a “complete” install (Search Server will not install a SQL 2005 instance, as is the case with the WSS installer).  Under “file location”, specify “E:\Office\12.0\Data” as the index storage location.
    2. Skip running of the Configuration Wizard after install.
  4. Install SharePoint Administration Kit v2.0
    1. Exclude Profile replicator component as it will not work on WSS
  5. Clone the server as many times as deemed necessary (at present, make one clone!).  Any cloned systems must be sysprepped before joining the domain.  Once prepped, join the computers to the domain and configure networking.
  6. If planning to add this server to a load balanced cluster, install NLB feature:
    NOTE:  We will not be doing this as we plan to use F5 hardware LBs instead.

    • from “administrator” cmd shell, run “ocsetup NetworkLoadBalancingFullServer”
    • Don’t join to a production NLB cluster until SharePoint configuration is complete!
  7. Replicate AD LDS (ADAM) instance to new machine, if required.
    (We need this as we are extending an existing ADAM instance to the new farm)

    1. In Server Manager, Click on “AD Lightweight Directory Services” Role,
    2. Click “AD LDS Setup Wizard”
      1. Select “A replica of an existing instance”
      2. Name the instance “ECTSInstance”
      3. Accept standard LDAP ports
      4. Specify a partner server to replicate from, using standard LDAP ports.
      5. Select the “OU=ects,…” partition set for replication (this should be the only partition!)
      6. Select a secondary (non-system) volume as target for AD LDS data… generally this will be “E:\Microsoft ADAM\ECTSInstance\data”
      7. Specify domain service account to run the AD LDS instance.
      8. Add “domain admins” to the AD LDS Administrators list.  Finish the wizard.
    3. Run the campus…bat file located in e:\Microsoft ADAM\ECTSInstance\data.  This will register the Kerberos Service Principal Names required for LDAP replication mutual authentication.
    4. Open the “Local Security Policy” Admin tool.  Add the domain service account to the “generate security audits” User Rights Assignment branch.
    5. Open the AD Users and Computers tool, locate the computer object on which you installed the instance.  Give the LDS service account “create all child objects” rights on the computer object.
    6. Add the cluster load balanced SSL cert into the Personal certificate store of the ECTSInstance service account.
      1. Request wildcard certificate using the procedure outlined here:
        (We use the web interface for requesting a certificate; make sure we use the RSA SChannel crypto provider to generate the request, use the “SHA-1” hash, use PKCS10 format, and use the “UVM – Web Server” request template.  For load-balanced LDAP servers, we must request a wildcard certificate (*.uvm.edu).)
        NOTE: This step will not have to be repeated again until the current cert expires.  To add another AD LDS server, export the cert from a current server, import into the new server
      2. Export the requested cert to a file, selecting the “export all extended attributes” and “export private key” options.
      3. Import the cert into the “Personal” branch of the service account’s certificate store on the target server.  Make sure that you import “all extended attributes”, and the private key.  Do not select the use of advanced encryption password.
      4. Restart AD LDS and test SSL connections.
      5. If all is not working (as is the case with one of my two servers), here is where we get into undocumented territory.  Here are some helpful resources for debugging:
        1. I set SChannel diag logging to verbose:
          • HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SChannel
            REG_DWORD EventLogging, value 0x7
          • Restart ECTSInstance, look for “SChannel” entries in the server “application” event logs.  These logs will tell you which certificate the system attempted to use, and why access failed.
        2. You may need to add the wildcard cert to the Local Computer Certificate Store as well… run MMC, add the “Certificates” snap-in for “Service Account”, using the “ECTS Instance” service.  Navigate to the “Personal” branch, run an import action, import the wildcard with all extended attributes and the private key.
        3. Now locate the physical copy of this cert in C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys (it will be the file with the most recently modified time stamp).  Add “read/execute” permissions to this file for the AD LDS service account, then restart the LDS instance.
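        Incidentally, both the logging tweak and the permissions grant can be scripted rather than clicked through. A rough sketch (the domain, service account name, and key file name are placeholders you must fill in yourself):

```
rem Turn SChannel event logging up to verbose (0x7)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SChannel" ^
  /v EventLogging /t REG_DWORD /d 7 /f

rem Grant the AD LDS service account read/execute on the private key file
icacls "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\<keyFileName>" ^
  /grant DOMAIN\ldsServiceAccount:(RX)
```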
    7. Force mutual authentication for replication traffic:
      1. Run ADSI Edit
      2. “Connect to”, enter the AD LDS server name in the Computer field, select the “Configuration” well-known naming context.  As documented in http://technet.microsoft.com/en-us/library/cc794841.aspx, get “properties” on the “CN=Configuration…” partition, and change the value of “msDS-ReplAuthenticationMode” to “2”.
    8. Set local password policy – this controls password policy of AD LDS accounts:
      1. Add the Sharepoint server computer account to the “ETS – SharePoint Password Policy” GP Object.  After running “gpupdate /target:computer /force”, verify the settings by doing the following:
        1. Open Local Security Policy control panel
        2. Expand “Account Policies”->”Password Policy”
        3. Settings applied should follow the 1/365/0/8/Disabled/Disabled format.  (we may want to revisit this policy later).
  8. Run the SharePoint Products and Technologies Configuration Wizard:
    1. Connect to an existing Farm
    2. Enter “WINDB” as the database server.  The wizard will correctly select “SharePoint_FarmConfig” as the configuration database.  The correct service account username will be provided… you need to enter the password.
    3. Click “Advanced Settings”, specify that you wish the server to host the Central Administration site.
      1. If setup fails with the error:
        “SharePoint Configuration Wizard failed with an exception: Error during encryption or decryption.  System error code 997.”
        A solution can be found here:
        Essentially we just run “stsadm -o updatefarmcredentials -userlogin domain\service_account -password <thePassword>” on the first SharePoint server, then re-run the wizard.
    4. Update the “Central Admin” shortcut to point to the local Central Admin site by doing the following registry hack:
      Essentially, edit the key:
      HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\12.0\WSS
      Then locate CentralAdministrationURL and change it to point to the local server.
  9. Configure Search Service:
    1. When Search is run in an environment where SharePoint services are accessed from an FQDN which is different from the physical host name (i.e. our environment, or any other environment with load balancers), you will need to work around the “loopback security check” feature of Windows.  Failing to do so will result in “access denied” errors in the crawl logs.  My thanks to Shawn Feldman for discovering this:
      The relevant work-around is documented here (see “Method 2”):
      We simply need to add the public FQDN of our SharePoint server to:
      Key: HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0
      Value: BackConnectionHostNames (REG_MULTI_SZ), data: sharepoint.uvm.edu
      And then restart the IISAdmin service.
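      The registry edit (Method 2 uses the multi-string “BackConnectionHostNames” value) can be scripted as a one-liner; our FQDN is shown, so substitute your own:

```
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" ^
  /v BackConnectionHostNames /t REG_MULTI_SZ /d "sharepoint.uvm.edu" /f

rem Restart IIS to pick up the change
net stop iisadmin /y && net start w3svc
```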
    2. Open the search admin page from SharePoint Central Administration:
      1. Access Crawling –> Content Sources
        1. Click the “Local Office SharePoint Server sites” default source.
        2. Define a crawling schedule for the SharePoint application
        3. Click “new content source” to add any additional content sources that are desired (i.e. our production file servers).
        4. Define additional crawl schedules for these new content sources.
        5. Set up “Crawl Rules” to exclude any directories from the content sources that you do not wish to have indexed.  In our environment, it was essential to exclude the “~snapshot” directories from the root of our NetApp file shares.
    3. Add Search Center to our SharePoint landing page:
      1. Create a new sub-site of type “Enterprise Search”
      2. Tune federated results by adding/removing web parts to the results page
    4. Add Federated Locations for additional search results:
      Ideally we would like to add results from our “GoogleWeb.uvm.edu” search appliance, and perhaps a Google “site” search for other close University partners.

      1. For GSA (Google Search Appliance) Federation, it appears we will need to setup an RSS Transform:
        1. http://enterprise-code-samples.googlecode.com/svn/trunk/rss-stylesheet/Readme.htm
        2. As the readme above recommends, we set up a new “front end” on the GSA for RSS results.  We called this Front End “RSS”.  We edit the “base_url” value to point to http://googleweb.uvm.edu/Search?. We then visit this URL and perform a query.  We take the resulting query results URL:
          and we substitute the name of our Front End in the place of the “proxystylesheet” and “client” values.  I also simplify the query just a bit as follows:
        3. We then use the RSS query URL above as the template URL for a federated search query in the SharePoint Search Center.
        4. We edit the search results page to include federated results, and specify the “UVM GoogleWeb Search” federated location in the “Location” of the Federated results web part (using the “Modify Shared Web Part” command).
    5. Integrate Windows Search desktop tool with Search Server: <- Not yet done, but this can wait until after the production cut-over.

      1. Approve Windows Desktop Search 3.0->Windows Search 4.0 update on WSUS server
      2. Deploy Group Policy templates for Windows Search 4
      3. Configure:
  10. Install Infrastructure Update for WSS3 x64:
    1. Initiate the update on the first node in the cluster.
    2. When prompted, start the install on the second cluster node.
    3. When the configuration wizard completes on the second node, go back to the first and allow configuration to complete.
  11. Install Infrastructure Update for Search Server x64:
    1. Initiate the update on the first node in the cluster.
    2. When prompted, start the install on the second cluster node.
    3. When the configuration wizard completes on the second node, go back to the first and allow configuration to complete.
  12. Clean up IIS settings for the newly created Web Sites – configure binding, authentication, and SSL.  (Note that these procedures are only accurate when using Windows-native load balancers… when we transition to F5 load balancing, it will not be necessary to return custom errors from IIS, as the F5 will handle HTTP-to-HTTPS redirections.):
    1. SSL Cert Installation:
      Install SSL certs into “Personal” Store of the Computer account using the “Certificates” MMC snapin.
    2. Binding:
      Open the IIS Manager MMC snapin.  On each site, right-click and select “edit bindings”:

      1. For site “SharePoint – 443” (which represents the traditional “sharepoint.uvm.edu” URL), bind the https and http protocols to port 443 and port 80, using the IP address for “sharepoint.uvm.edu”.  When binding SSL, select the appropriate cert from the “SSL Certificate” drop-down menu.
      2. For “SharePoint – Internet” (which represents SharePointLite), bind https and http, ports 443 and 80, to “sharepointlite.uvm.edu” and its IP.  Again, select the correct SSL cert for this site.
      3. For “SharePoint – Extranet” (which represents PartnerPoint), bind https and http, ports 443 and 80, to “partnerpoint.uvm.edu” and its IP, selecting the matching SSL cert once again.
    3. SSL Configuration:
      1. In IIS Manager, open the “features view” for each site.
      2. Double-click “SSL Settings”
      3. Check “Require SSL”, leaving the default “ignore Client certificates” setting.
      4. Now double-click the “Error Pages” item for the server root.  Add a custom error for 403.4 (SSL required), pointing to our custom “redirect.html” JavaScript file.  We will need to have copied this file into “c:\inetpub\custerr\en-US” before completing this step.
      5. Now find the applicationHost.config file for the IIS server.  This should be located in “C:\Windows\system32\inetsrv\config”.  Locate the section for each site that serves SharePoint content (i.e. <location path=”SharePoint – 443”>), then locate the <httpErrors> tag under <system.webServer>.  In the httpErrors tag, change the value for “existingResponse” from “PassThrough” to “Replace” (the value “Auto” also seems to work, but may produce inconsistent results).  This will prevent ASP.NET from replacing the 403.4 error response from the IIS server.  I am much indebted to this forum thread for this breakthrough:
        Also helpful was the new “failed request tracing” module in IIS7:
        More information on the meaning of the various existingResponse values can be found here:
  13. Install the MS FilterPack 1.0 (Search Server can already index most Office 2007 documents, but this adds the ability to index inside OneNote files and ZIP archives):
    1. Follow instructions at:
  14. Install Adobe iFilter, with 64-bit “thunking” DCOM service:
  15. Install MindManager extensions.
    1. DEPRECATED – We will discontinue this extension with the new upgrade as it does not work with MM v7 or v8
  16. Install ECTS components on each web front end server.
    • Having problems with the installation script… what if we try the ECTS update available through CodePlex???
      1. If using updated ECTS files, it will be necessary to update the PartnerAdmin and PartnerConfig pages, as well as the self-service Site Collection Manager.  The existing pages will not work because the GUIDs on the Web Parts have changed.
      2. The ects_setup_sharepoint.vbs script still fails using the updated code… Since the codeplex team has not documented their changes, I think we will skip this option.
    • Troubleshooting issues:
      1. The ects_setup_sharepoint.vbs script succeeds in installing the ECTS solution, but fails when activating site features.  I suspect that “cscript” on Server 2008 is not processing return codes from stsadm.exe correctly, and thus is reporting failure to install features.  (I am not positive about the reason for the script failure, although it certainly is not a result of stsadm.exe being broken.)
        I was able to work around this problem by opening the ects_setup_sharepoint.vbs file in a text editor, searching for the error string that was sent to the console when the script failed, then running all of the operations in the script manually from that point forward.  Fortunately, all of the stsadm commands in the script are successful when run from the command line.
      2. ECTS is not compatible with MS Load Balancing out of the box.  I switched to an F5 load balancer before working through the problem.  It is possible that the problem I was having could have been fixed with the same “loopback security check” workaround that came up during our F5 configuration.
        In fact, we may have had the problem even with the f5 in place, but I would not know because I applied the loopback fix before implementing the F5.
        The error codes suggest that a login failure is occurring between the IIS application and the AD LDS LDAP instance.  When I try to connect to the load-balanced LDAP DNS name using the “LDP.exe” LDAP client, I also get an authentication error.  However, when I connect to the local server address, authentication works.
      3. As was the case when I first installed ECTS, the web.config files required a bit of hand-tuning to get services working correctly:
        Once again, I had to modify the “ADAMConnectionString” in the web.config of each IIS site to reflect the actual DNS name of the load-balanced AD LDS servers.  I had installed ECTS using a different name initially, and the ECTS un-installation script did not clear out these values.
      4. I did find it necessary to deactivate all ECTS site collection features, re-activate them, then perform an IIS reset before my existing ECTS management pages would work again.  This seems pretty par for the course when removing and re-installing SharePoint solutions.
      5. After connecting the production content database and changing the URL for the server, I also needed to edit the “LDAPHost” value to reflect the production name of the server in the “TEMPLATE\FEATURES\ECTSBase\Feature.xml” file (located in the “12” hive).  This value was set at the time that ECTS was deployed to the server.  I expect that the same (wrong) LDAPHost value would get replicated to additional ECTS servers if we were to add new ones, because this setting is contained in the RESX package that gets built by the ECTS installer, and cached in the database for future deployment.  I expect I would need to do a full retract/remove/redeploy to fix the problem permanently.
  17. Install Globally-deployable solutions from the “fab 40” application template.  If you deploy a web front end into an existing farm, the files required by these features will get transferred automatically.  However, when building a new farm, we need to install them manually.  Currently required “server admin” templates are:
    • ApplicationTemplateCore
    • ChangeRequest
    • ContactsManagement
    • DocumentLibraryReview
    • EventPlanning
    • HelpDesk
    • InventoryTracking
    • ITTeamWorkspace
    • Knowledgebase
    • LendingLibrary
    • PhysicalAssetTracking
    • ProjectTrackingWorkspace
    • RoomEquipmentReservations
    • Procedure:
      • stsadm -o addsolution -filename <file_path>\<template_name>.wsp
      • stsadm -o deploysolution -name <template_name>.wsp -allowgacdeployment
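    Rather than typing thirteen pairs of stsadm commands by hand, the whole template list can be run through a cmd loop. A sketch (the file path is a placeholder, and the template list here is abbreviated):

```
rem From an interactive cmd shell (double the % signs if run from a .bat file)
for %t in (ApplicationTemplateCore ChangeRequest ContactsManagement HelpDesk Knowledgebase) do (
  stsadm -o addsolution -filename C:\Fab40\%t.wsp
  stsadm -o deploysolution -name %t.wsp -allowgacdeployment -immediate
)
```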
  18. Install radEditor:
    1. Install ASP.NET Ajax for .NET 2.0, version 1.0
    2. Follow the Ajax configuration for SharePoint configuration guide found here:
    3. Install radEditor using the included instructions.
    4. Copy radEditor configuration files from an existing production server to the new server:
      1. In the directory:
        ”C:\Program Files\Common Files\Microsoft Shared\web server extensions\wpresources\RadEditorSharePoint\[versionString]\RadControls\Editor”
        Back up the existing ListConfigFile.xml, ConfigFile.xml, ListToolsFile.xml, and ToolsFile.xml files.  Replace with versions customized for UVM.  Note that the MOSS LinkManager tool does not work in WSS.  Also note that when editing list content that does not support “Enhanced Content”, the first toolbar in the ListToolsFile.xml will be removed… in past versions, the toolbar named “enhancedTools” was removed.
      2. Copy the files ListConfigFile.xml, ConfigFile.xml, ListToolsFile.xml, and ToolsFile.xml to all other nodes in the cluster.
      3. Perform an IISRESET.
      4. Update ONET.xml files in the “12” hive to activate the radEditor feature by default in all new sites (see ONET.xml template files on the prod web front end for examples).
        NOTE that the “RadEditor for non-IE browsers” and “RadEditor for IE” features have been collapsed into one unified feature.  Update the ONET.XML files accordingly!  (Note that the feature ID for the main RadEditor List editor has not changed… only its name is different.  We did not have to insert a new default feature ID, but we did need to remove the “RadEditor for IE” feature because it is no longer present in RadEditor MOSS.)
      5. Run:
        stsadm -o uninstallfeature -name RadEditorFeatureRichHtml
        This “Web Content Management” feature is not supported in WSS, so we may as well remove it to avoid confusion.
      6. Deactivate and then re-activate the radEditor features on at least one existing site, and test functionality.
  19. Install “Smiling Goat” Feed Reader (RSS/ATOM subscriber web part)
    1. This will require Feed Reader users to update their web parts!
    2. Current release here:
      (version at the time of this writing)
  20. Install SharePoint Training Kit:
  21. Tune web application settings to match production server:
    1. In “Operations”:
      1. Configure blocked file types – in our case we have been requested to allow “MAT” files.  MAT is one of many obscure Office file extensions, but also is used by MatLab, a common mathematical/statistical application used on campus.
      2. Enable Usage Analysis Processing – relocate log files off of the system volume.
        1. You MUST grant read/write access for the local WSS_WPG group to the folder that will hold the usage analysis logs.  If you fail to do so, you will see lots of errors in the Diagnostic Logs along the lines of “Cannot create folder “<big ugly hexadecimal number>””.
      3. Configure incoming and outgoing email settings.
        1. Incoming mail requires SMTP to be installed on the server – managed with the old-school IIS 6.0 Manager MMC.
        2. We needed to use the “Advanced Settings” to specify the SMTP drop folder in the Incoming Mail settings page of Central Administration.  SharePoint diagnostics logs indicated that the timer service could not determine the drop folder location automatically.
        3. After configuring the mail drop folder, SharePoint still was unable to process messages.  “Access denied” events started queuing up in the event viewer.  A quick analysis using SysInternals ProcMon shows that the WSS Access account (being used by the OWSTIMER.EXE process) has no ability to access the drop folder defined above.  I just added “modify” access for that account to the drop folder, and incoming mail started processing immediately.
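The permission fix described above boils down to a single icacls command. Since the drop-folder path and service-account name vary per install, the sketch below just assembles and prints the command with placeholder values rather than executing it; substitute your own drop folder and the account you see running OWSTIMER.EXE.

```shell
#!/bin/sh
# Build the icacls command that grants "modify" on the SMTP drop folder
# to the WSS access account.  Both values below are examples only.
DROP='C:\inetpub\mailroot\Drop'
ACCOUNT='CAMPUS\svc-wssaccess'
CMD="icacls \"$DROP\" /grant \"$ACCOUNT\":(OI)(CI)M"
printf '%s\n' "$CMD"
```

The (OI)(CI)M flags apply modify rights to the folder, its subfolders, and its files; run the printed command in an elevated prompt on the web front end.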
    2. Under “Application Management”:
      1. Tune the “Web application general settings”:
        1. Set upload limits for files
        2. Set time zone
        3. Set quota templates for new sites
      2. Configure “site use confirmation and deletion”. <- Be sure to do this after PROD cutover!!!  Can’t do it ahead of time as users will start to get email from the pre-prod server!
      3. Enable self-service site creation. <- Also needs to be done after prod cut-over
      4. Configure “policy for web application” to allow selected groups of SharePoint administrators rights to all sites in the farm.
    3. Edit the footer of the “welcome” email message starting at line 5219 of “core.en-US.resx” in the 12-hive “resources” folder.  Replicate on all web front ends in the farm.
  22. Configure f5 load balancers:
    1. Lots of IP addresses required:
      1. “floating ip” for network on which F5s will communicate with the SharePoint web servers.
      2. 2x Self-IPs for the physical interfaces on the F5s which will handle the floating IP
      3. 3x IP addresses for the public SharePoint URLs (standard, lite, partner)
      4. 6x IP addresses for the private interfaces on each web server node (for standard, lite, and partner sites, one each on two web servers)
    2. Need to generate SSL Certificate/Key pair files for the load balancer “SSL Profiles” (the F5 uses separate files for certificates and keys, as opposed to IIS, which keeps them together in a single PFX file – we need to split the IIS files and upload them to the F5).
      1. Using the certificate manager MMC snapin, export the production certs to PFX (PKCS #12) files (make sure to specify that you wish to export the private key, otherwise PKCS #12 is not an option in the wizard).
      2. Convert the pfx file into a PEM using the openssl executables (available for Windows here: http://www.slproweb.com/products/Win32OpenSSL.html) using the following command:
        openssl.exe pkcs12 -in [infile].pfx -out [outfile].pem -nodes
        (You will be prompted to enter the password for the PFX, if it has one.  The “-nodes” flag indicates that the output file will not be encrypted with a password)
      3. Open the PEM file created above with a text editor.  Remove the private-key portion of the file (all content from “Bag Attributes” to “-----END RSA PRIVATE KEY-----”) and paste this into a separate “KEY” file (UNIX-style line endings, ANSI-encoded).
      4. Upload the new PEM and KEY files into your F5 load balancer to create a new SSL client profile.
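As an alternative to the text-editor split in steps 2–3, openssl can extract the certificate and key from the PFX directly. So that the snippet below runs end-to-end, it first builds a throwaway self-signed PFX; in practice you would start from the PFX exported in step 1, and all file names here are examples.

```shell
#!/bin/sh
# Demo setup: make a throwaway self-signed cert and bundle it into a PFX.
# (In real life, start from the PFX exported via the certificate MMC.)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=sharepoint.example.edu" -keyout demo.key -out demo.crt
openssl pkcs12 -export -in demo.crt -inkey demo.key \
  -passout pass: -out sharepoint.pfx

# Split the PFX for the F5: certificate only...
openssl pkcs12 -in sharepoint.pfx -passin pass: -clcerts -nokeys -out sharepoint.pem
# ...and the unencrypted private key only (-nodes, as above):
openssl pkcs12 -in sharepoint.pfx -passin pass: -nocerts -nodes -out sharepoint.key
```

The resulting sharepoint.pem and sharepoint.key files map directly onto the certificate and key slots of the F5 SSL client profile in step 4.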
    3. Set up two virtual servers for each SharePoint URL – one to redirect HTTP to HTTPS, one to handle regular traffic.
      1. Add the following iRule to perform the SSL redirect, and attach it to the Virtual Server listening on port 80:
        (This rule will redirect your browser to https://sharepoint.uvm.edu from either http://sharepoint or http://sharepoint.uvm.edu… well, actually it will redirect any non-HTTPS connection to a HTTPS version of itself, appending our domain “.uvm.edu” if it is missing.)

        when HTTP_REQUEST {
          if { [HTTP::host] ends_with ".uvm.edu" } {
            HTTP::redirect https://[HTTP::host][HTTP::uri]
          } else {
            HTTP::redirect https://[HTTP::host].uvm.edu[HTTP::uri]
          }
        }
  23. Test the farm:
    1. Test each feature on both web front ends by alternately disabling the nodes in the load balancer configuration.
    2. Test again with both nodes enabled… watch for authentication and session persistence issues.
    3. Test all features in each access mapping – SP, SPLite, and Partner… web.config file variations could cause problems!
  24. Consider Deployment of “Group Board 2007” and “Sample Master Pages”:

And to complete the cutover:

  1. Redirect SharePoint traffic to a “maintenance underway” page, or just put the service in read-only mode.
  2. Backup the current production content database
  3. Detach the content database from the production server.
  4. Attach the content database to the new servers
  5. Change the SSL profiles used on the F5 load balancers to use the production certs.
  6. Take down the original SharePoint IIS server and disable it.
  7. Change the listening IP addresses on the F5 virtual servers to match the production server IPs.
  8. Configure Zone security and Alternative Access Mappings for the Farm to reflect the new production names:
    1. Make sure that https://sharepoint.uvm.edu, https://sharepointlite.uvm.edu, and https://partnerpoint.uvm.edu are all listed as “Public URLs” for the farm.
    2. Make sure that the HTTP equivalents of these URLS are listed as INTERNAL URLS in the AAM table if you are performing SSL termination on the load balancers.
  9. Contact SAA Mail admins to update the mail server LDAP records for SharePoint mail routing
  10. Dredge through this post for things that will need to be configured after the migration, and test mercilessly.