Category Archives: SQL Server

Notes from the MS SQL Server battlefront

Using SQL Logins with “AlwaysOn” Availability Groups

I had some fun today configuring a SQL 2012 “AlwaysOn” Availability Group working with a database that was configured to use SQL Logins.  I am working with the vendor to see if we can use Integrated Authentication instead, but in the meantime I managed to get failover functional.

The problem I was experiencing was that on failover, the database users defined in the database lost their mappings to the SQL Login on the server.  I had created SQL Logins on both nodes of the Availability Group with identical usernames and passwords, but the mapping still failed.  Turns out this was happening because SQL Logins (like AD and NT accounts) have SIDs, and these SIDs are used to map database users to logins.  To correct the problem, I needed to create SQL Logins with identical SIDs on the two servers.


  1. Create SQL Logins on one node of the Availability Group and perform mappings.
  2. Lookup the SIDs of the users using the following query:
    SELECT SUSER_SID ('[SqlLogin]')
  3. Script out creation of the matching logins on the other node, specifying the SID explicitly:
    USE [master]
    CREATE LOGIN [SqlLogin] WITH PASSWORD = '[SamePasswordAsPrimary]', SID = [SidFromStep2]
  4. Failover the Availability Group and verify that user mappings are working as planned.  Verify that passwords are working, too.
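A quick way to verify step 4 is to compare login SIDs on both nodes directly. A minimal check, assuming you can query both servers:

    -- Run on each node of the Availability Group; type 'S' = SQL login.
    -- The name/sid pairs must match on both servers for database user
    -- mappings to survive failover.
    SELECT name, sid FROM sys.server_principals WHERE type = 'S';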

Another useful tidbit here is a script to re-map SQL database users to local SQL logins:

USE myDatabase
EXEC sp_change_users_login @Action='update_one', @UserNamePattern='databaseUserName', @LoginName='SqlLoginName';

Really I am surprised that I did not run into this problem with our SQL 2008 mirrored databases.  The problem should have been present there as well.
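Incidentally, before re-mapping you can list any orphaned database users first; sp_change_users_login also supports a 'Report' action:

    USE myDatabase
    -- Lists database users whose SIDs have no matching server login
    EXEC sp_change_users_login @Action='Report';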

SQL Reporting Services – Load Balancing Notes

As part of our evaluation of a SQL Consolidation Server, I am attempting a deployment of SQL Reporting Services in a load-balanced environment (using F5 load balancers in layer 4/fast npath config). I have followed my usual procedure for setting up a npath/direct server return config on Windows Server (with a couple of caveats… on Server 2012 the name of the MS Loopback Adapter has been changed to ‘Microsoft KM-TEST Loopback Adapter’). I then installed Reporting Services on both load balanced servers and connected them to the same back end database, and configured them to use the same SSL certificate.

Problems arise when attempting to connect to the load-balanced Reporting Services URL over SSL. Initial connection succeeds, but authentication fails. Log trawling reveals the following event:

Log Name:    System
Source:      LSA (LsaSrv)
Event ID:    6037
Level:       Warning

The program ReportingServicesService.exe, with the assigned process ID 13432, could not authenticate locally by using the target name HTTP/ The target name used is not valid. A target name should refer to one of the local computer names, for example, the DNS host name.

Try a different target name.

I found a similar error discussed online, and the solution there worked for me, too:

  1. Go to REGEDIT and open the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0
  2. Right-click MSV1_0 –> New -> Multi-String Value
  3. Type BackConnectionHostNames and press Enter.
  4. Right-click the newly created value and select Modify.
  5. Enter the hostname of the site: WEBSITENAME (and on a new line, enter the FQDN of the site).
  6. Restart IIS
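The same change can be scripted with reg.exe; a sketch, where the two hostnames are placeholders for your load-balanced name and its FQDN:

    :: Adds the load-balanced names to BackConnectionHostNames (REG_MULTI_SZ).
    :: "reports" and "reports.example.com" are placeholder names.
    reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 /v BackConnectionHostNames /t REG_MULTI_SZ /d "reports\0reports.example.com" /f
    iisreset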

I also had considered enabling Kerberos authentication for Reporting Services, but I have decided to forgo this step (since, I think, we will want users to be able to retrieve reports from non-domain-joined computers).  However, if we did want to enable Kerberos, four things would need to happen:

  1. Use SETSPN to add the SPN:  setspn -a HTTP/[reportServerHostName] [domain]\[reportServerServiceAccount]
  2. Configure the Reporting Services service account to be trusted for Kerberos delegation (using AD Users and Computers, under the “Delegation” tab)
  3. Modify the rsreportserver.config file (in %ProgramFiles%\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer).  Find the “AuthenticationTypes” section, and specify <RSWindowsNegotiate/> as the first/only entry.
  4. Restart Reporting Services
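The rsreportserver.config change in step 3 would look something like this (a sketch; surrounding elements abbreviated):

    <Authentication>
        <AuthenticationTypes>
            <RSWindowsNegotiate/>
        </AuthenticationTypes>
        <!-- remaining Authentication settings unchanged -->
    </Authentication>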

Preparing the SharePoint 2007/WSS3 Database for Upgrade

  1. Run pre-upgrade check:
    stsadm -o preupgradecheck
    (to be run on the WSS 3 server… requires WSS 3.0 SP2 and October 2008 CU).
    Now, repair any problems that were reported…
  2. Delete orphaned sites:
    1. run: stsadm -o enumallwebs
      – find sites with “InSiteMap” set to “false”.
    2. take note of the SiteID GUID, then run:
      stsadm -o deletesite -force -siteid [siteid] -databaseserver [dbserver] -databasename [dbname]
  3. Delete references to missing web parts.  The upgrade check report tells us:
    “ The following web part(s) are referenced by the content, but they are not installed on the web server Id = dcdbbbd0-8dd6-1ecb-a3b2-12d30061d482, Type = Unknown, Reference = 7, Status = Missing …”
    but there is no particular advice on how to fix the problem. I wasted a few hours trying to enumerate web parts in use by a site using PowerShell, and ended up digging around in the content database using Transact-SQL… here is what works:

    1. Run the following against the content database:
      use [contentDatabaseName]
      select DirName,LeafName from dbo.AllDocs where id in
      (select tp_PageUrlID from dbo.WebParts where
      (tp_WebPartTypeID='d5101cfe-e315-c578-cd06-1966f283e3ed')
      OR (tp_WebPartTypeID='602e7431-ac3e-75b9-c8e0-57533bdab161'))
    2. Access the page returned by the above query, appending “?Contents=1”.  Delete any web parts reporting errors.
  4. We are informed that various features are referenced by content, but that the features are not available on the server.  We are given only feature IDs.  Back to SQL:
    • use STSContentDBPrime
      select FullUrl from [Webs] where Id in (
      select WebId from [Features] where (FeatureId = 'bbe9def7-2fe9-a0b1-d712-aa128c837ebe')
      OR (FeatureId='bbe9def7-2fe9-a0b1-d7bb-aa128c837ebe'))
    • This returns the sites that are using these “bad features”.  But what to do about it?
    • A suggested fix is to use “WSSAnalyzeFeatures” and “WSSRemoveFeatureFromSite”, both of which work to purge the evil old CKS:Enhanced Blog Edition feature from the one site that was using it… sheesh!
  5. We are warned that “The following site definition(s) are referenced by the content, but they are not installed on the web server”. The name of the missing site definition is “Unknown”. We are given only a “template id” (11003) and a Language Code (1033). Finding where this definition is being used can be accomplished with a very small sql query:
    use SharePoint_Content_1 
    SELECT Title, MasterURL, WebTemplate FROM dbo.Webs where WebTemplate='11003'
  6. We are warned that “setup files” are referenced in the database that are not present on the server. We are given the names and paths of the files that are missing, but not the web sites that reference them. The offending site can be tracked down quickly using “stsadm”. Run this command:
    stsadm -o enumAllWebs -includesetupfiles > allWebs.txt
    Then search the resultant file for one or more of the filenames listed in the pre-upgrade check report. In this case, I was able to locate a site that used “SharePoint Learning Kit” components. This site was completely broken, since we removed the Learning Kit over a year ago. I got approval from the site owner, and deleted the offending site.
  7. We are warned that the feature “f374a3ca-f4a7-11db-827c-8dd056d89593” is referenced by over 500 sites, but that the feature is no longer present on the server.  This feature is the “RadEditor for MOSS for IE”, which in fact has been phased out.  Let’s use a handy “for” loop.
    1. First we export the SQL query we used above as a CSV file (this time searching for the “f374a…” feature instead).
    2. We use that csv as fuel for our “for” loop:
      for /f %i in (H:\BadFeatureSites_2010.csv) do WssRemoveFeatureFromSite.exe -scope site -url %i -featureid f374a3ca-f4a7-11db-827c-8dd056d89593 -force
    3. Oh but look… WssRemoveFeatureFromSite can’t deal with sites that have spaces in the URL (why would you have a space in your site name, right?).  dang!  Let’s try “SharePoint Feature Administration and Clean Up Tool“… it works!
  8. We are told that we are missing some additional template files such as:
    Path = [C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\1033\MPS..\sts\lists\wplib\dwp\MSContentEditor.dwp], Reference = [2], Status = [Missing]

    • Some searching discovered KB839877, which informs us that an earlier version of the “Workspace Meeting” template contained an accidental double period when a single period was intended.  But how do we find the site that is using this bad template?
    • The excellent tool “SharePoint Manager” from Carsten Keutmann allows us to explore the SharePoint object model to more quickly find Site Template data that we need in a site.
    • The really really excellent tool “PowerGUI Script Editor” was used to quickly develop this script:
      # Discovers Meeting Workspaces from all "Webs" in the SharePoint web application defined in the "webAppUrl" variable.
      # Outputs discovered data to file "sitelist.csv"
      # Greg Mackinnon, 2010-02-05
      [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") | Out-Null
      $webAppUrl = ""
      $wa = [Microsoft.SharePoint.Administration.SPWebApplication]::Lookup($webAppUrl)
      Remove-Item -Path .\sitelist.csv -Force
      foreach ($site in $wa.Sites) {
          foreach ($web in $site.AllWebs) {
              Write-Output ("Checking web: " + $web.Url)
              if ($web.WebTemplate -match "MPS") {
                  Out-File .\sitelist.csv -InputObject $web.ID.Guid -NoClobber -Append -Encoding Unicode
              }
          }
      }
    • The script returns a list of “web” object GUIDs.  We feed this into SQL to discover where in the content database the sites that are using these web parts are implemented:
      use STSContentDBPrime
      --select DirName,LeafName,SetupPath from dbo.AllDocs where WebId='abee2623-56d2-43d6-8a9c-7b362fbd323b'
      select DirName,LeafName,SetupPath
      from dbo.AllDocs
      where ((WebId in ('siteGuid1','siteGuid2','...')) AND (SetupPath LIKE '%MSContentEditor.dwp%'))
    • Finally, the SQL query finds the “Meeting Workspace” sites that are using this corrupted web part.  We probably could just delete the web parts in question from the web part gallery, but I chose instead to use the stsadm.exe “export”, “deletesite”, “createsite”, and “import” commands to re-create the sites.  The export utility filtered out the corrupt content for us.  Once restored, the discovered sites no longer contained corrupted web parts.   Phew!
  9. The WSS3 instance has been prepped… Clone existing content database, and proceed with upgrade testing!
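Incidentally, the spaces problem from step 7 may also be avoidable at the “for” loop level, assuming the failure was in how “for /f” tokenizes lines. A sketch, using the same CSV as above:

    :: "delims=" tells for /f to treat each whole line as a single token,
    :: so site URLs containing spaces are passed to the tool intact.
    for /f "usebackq delims=" %i in ("H:\BadFeatureSites_2010.csv") do WssRemoveFeatureFromSite.exe -scope site -url "%i" -featureid f374a3ca-f4a7-11db-827c-8dd056d89593 -force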

SQL Server 2008 – Configuration Notes For A New Mirror

With the impending release of SharePoint 2010, I have decided to try to get our backing SQL infrastructure up-to-date, and also to re-architect our system to use SQL mirroring.  This project has been easier than I expected, with a few inevitable bumps along the road:

  • The SQL documentation states that when setting up a new mirrored database connection, you must take a backup of the existing database and restore it to the mirror server before configuring the mirror.  It is a bit vague on the particulars.  I found out the hard way that you must restore WITH NORECOVERY and not bring the database to a running state, otherwise the mirror setup will fail.  The worst part is that the mirroring wizard will not give a helpful error… it will claim that ‘no network endpoint is available for the mirror connection’, or some such rot, leading you to believe that you have a problem with your firewall.  So, make sure you do the following when setting up the database for mirroring:
    1. Backup the database and transaction logs separately
    2. Restore the database backup with the “WITH NORECOVERY” option.
    3. Restore the transaction logs with the “WITH NORECOVERY” option.
  • Concerning SQL Express:  SQL Express is approved for use as a Witness server in a SQL mirroring configuration.  However, SQL Express will not work as a Witness out of the box.  To get things running, you will need to:
    • enable the “TCP” protocol in the “SQL Server Configuration Manager” tool.
    • You also may need to configure the instance to use the static TCP port “1433” for SQL Services (also in the Configuration Manager).  On my installation, SQL Express defaulted to the use of dynamic ports for SQL, which made for a good deal of trouble in configuring the Windows firewall.
    • You may need to enable the SQL Browser service in “Configuration Manager”, and open UDP port 1434 to your management station.  Without the browser service, I could not get SS Management Studio to talk to SQL Express.  Boo!
  • If configuring databases on Windows Server 2008 or 2008 R2, you will have the option to use the much-improved Windows firewall to secure your SQL instance.  I have chosen to restrict TCP access to SQL services to only those hosts that need connectivity (at least for now).  Unfortunately, SQL 2008 does not create any firewall rules or firewall rule templates for you, so you will need to do it yourself.  Here are the required ports:
    • TCP Port 1433 – For basic client/server connectivity, and remote administration
    • UDP Port 1434 – For the SQL Browser service… not needed in most cases, but required to enable connectivity to SQL Express instances in my case (see above).
    • TCP Port 5022 – The Mirroring Endpoint default port… required open on all SQL servers that will be used in a mirror configuration.  SQL clients do not require connectivity to this port… only primary, mirror, and witness servers in a cluster.
    • TCP Port 135 and program-based access to “%ProgramFiles%\Microsoft SQL Server\100\DTS\Binn\MsDtsSrvr.exe” – Required to establish SQL Server Integration Services remote connectivity.  These are DCOM-based services that use those annoying dynamic RPC ports.  SQL clients should not be allowed to connect to this port.  I have it open only to the systems from which I plan to perform SQL management operations.
  • Concerning Maintenance Plans:  MPs are made somewhat more complicated on systems using mirroring.  Clearly you want your Maintenance Plans to run regardless of where your primary mirror happens to be at any given time.  However, DBCC and backup operations can be performed only on the primary database in a mirror set.  So how do you ensure that your maintenance plans will always run on the primary?  A common solution seems to be to create identical maintenance plans for both mirror servers, but to use the “Ignore databases where the state is not online” flag in the SS Management Studio database selector dialog (“WITH SKIP” T-SQL argument).  This also brings up the question of how you can replicate those tediously configured Maintenance Plans between your different mirrors.  The answer involves SSIS packages and the SQL Server Business Intelligence Development Studio (an optional component in a SQL Server installation):
    • Use SS Management Studio to connect to “Integration Services” on the SQL instance where you have configured your maintenance plans.
    • Navigate to “Stored Packages\MSDB\Maintenance Plans”… hey look… all of your maintenance plans are just SSIS packages!
    • Right-click the plan you want to copy, then select “Export Package”.
      • Select “File System” for your package location
      • Browse to the location where you want to save your package and click “OK”.
    • You could now use the BI Development Studio to edit the package file, but the smarter thing to do is to open the file using a text editor.  I like “Notepad++”.  Find and replace all instances of the name of your source SQL server with the name of your destination SQL server.  If you fail to complete this step, your Maintenance Plans will fail with connection errors.  Why?  Because the SSIS export function preserves the local server connection strings, causing your destination system to attempt to backup databases on a remote computer instead of itself.  Boo!
    • Connect to Integration Services on the destination SQL server.  Navigate to the same “Maintenance Plans” directory, right-click and select “Import”.  Again select “File System” for the package location.  Browse to the file you just edited, and click “OK” to import.
    • Verify that the imported package is configured correctly using the Maintenance Plan editor.
    • You will need to update the schedules for your plans because schedules are not an integral part of an MP.  This is “metadata” that does not get exported.
    • Remember to repeat this procedure every time you update your MPs on either server in your mirror configuration.
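The backup/restore sequence from the first bullet above can be sketched in T-SQL (the database name and file paths here are placeholders):

    -- On the principal: back up the database and transaction log separately
    BACKUP DATABASE [AppDB] TO DISK = 'C:\Backups\AppDB.bak'
    BACKUP LOG [AppDB] TO DISK = 'C:\Backups\AppDB.trn'

    -- On the mirror: restore both WITH NORECOVERY, so the database stays
    -- in a restoring state, ready for mirroring to be configured
    RESTORE DATABASE [AppDB] FROM DISK = 'C:\Backups\AppDB.bak' WITH NORECOVERY
    RESTORE LOG [AppDB] FROM DISK = 'C:\Backups\AppDB.trn' WITH NORECOVERY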

Migrating Symantec AntiVirus management servers

Well, it has been a fun week of migrating our Symantec AntiVirus servers from old, dying Dell 5th-gen PowerEdge servers onto bleeding-edge ESX virtual machines. Here are some of the highlights:

Firewall changes:
In moving the servers, we had to assign new IP addresses in our protected 102.0 subnet. Thus, I had to research the firewall exceptions required for access to the servers. It seems the two required ports are:
TCP port 2967 (Inbound) – for Symantec AntiVirus service (RtVscan.exe), for AV definition push updates and client monitoring
UDP port 38293 (Inbound) – for Intel PDS service (pds.exe); allows retrieval of AV policy settings
(initial rules were not correct, resulting in clients falling out of the management cycle)

LiveUpdate changes:
I have been wanting to change the address of our internal LiveUpdate server for awhile… we are now using a round-robin alias (which alternates between norton1 and norton2) as the primary distribution server, with the individual servers as backups. We are considering a load balancing implementation instead, but this probably is unnecessary given the presence of “backup servers” in the liveupdate.hst file distributed to clients.
The only real problem here was that many of the file types in the LiveUpdate download directory were not of recognized “MIME Types” (i.e. they were not html, xml, zip, txt, audio/video, or MS Office files). I had to add the following extensions to the IIS configs before clients could successfully retrieve updates:
.x00, .ieg, .m25, .ia64ap, .x86, .lin
Once these MIME types were added and I had run an “iisreset”, LiveUpdate started to function normally.

Reporting Services
Migration of reporting services is a total PIA. I am trying to migrate the back-end database to an external SQL 2005 server from SQL 2000 in addition to re-installing the Reporting Services binaries on the new Norton2 server. Here are the steps taken so far:

  • detach the SymReport database from the old server, copy the files to the new server and attach
  • change ownership of the database back to its original setting
  • change the compatibility level of the database to “90” (SQL 2005)
  • install the new SQL native client on the SAV hosts
  • launch the Reporting services installer setup.exe. Note: do not launch from the autorun setup menu on the SAV CD! You must use the reporting services setup.exe or the advanced install options that we need will not be available.
  • supply the credentials necessary to connect to the new SQL 2005 DB. Also, specify alternative credentials for the db user, datasource name, and db name. Use the DB name that was imported into the SQL 2005 server, and get the username that was previously used from the DB security tabs.
  • After install, the reporting server should smoothly reconnect to the existing DB. You can check that this is happening in the SQL activity monitor pane.
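The database-side steps above can be sketched in T-SQL (the database name, file paths, and owner login are placeholders; sp_dbcmptlevel is the SQL 2005-era way to set the compatibility level):

    -- Attach the copied database files on the new SQL 2005 server
    EXEC sp_attach_db @dbname = 'SymReport',
        @filename1 = 'D:\Data\SymReport.mdf',
        @filename2 = 'D:\Data\SymReport_log.ldf'

    -- Restore the original database owner
    EXEC sp_changedbowner 'originalOwnerLogin'

    -- Raise the compatibility level to 90 (SQL 2005)
    EXEC sp_dbcmptlevel 'SymReport', 90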

Unfortunately, I ran into some problems with the Reporting Agent on the primary SAV server (it is running a remote agent). The agent slowly hogs up all the memory on the box and creates a CPU-bound condition (very bad news on an ESX host). I had no success trying to troubleshoot the situation, and I was not having fun… Using Sysinternals tools I was able to watch the ReportingAgentLauncher thrash the heck out of some temp files that it was creating, but it never did anything with these files. I believe there must have been some bad configuration information being fed to the SAV server from the reporting database, and that this was creating a loop. So ultimately I fixed the situation with the following “solution”:

  • Uninstall reporting services
  • Reinstall with a new database (thus abandoning old report data)

Voila… reporting services are running normally, we have our first production SQL 2005 database, and our second set of production ESX guests.

SQL 2005 setup on 2k3EE R2 cluster

Here are a few notes from my initial setup of SQL Server 2005, Standard Edition on our new IBM LS20 blades.

Our blades are equipped with 2x dual-core AMD Opteron processors, two Qlogic FC HBAs, two Broadcom 57xx Ethernet ports, and 8 GB of RAM. We thus are using Server 2003 64-bit edition for the install.

Server install and setup:

  • Installed OS from the BladeCenter Management Module, mounting Server 2003 ISO file from a neighboring blade’s web browser. It is slow and ugly, but works.
  • On first boot, we have no functioning network or FC HBA. I downloaded the latest Broadcom application ISO from IBM, mounted it via the management module, and ran the installer. Also ran the “Broadcom Advanced Services” installer, then configured an Active/Active NIC team with VLANs for public networking (720) and the cluster heartbeat (4000).
  • On the heartbeat network, we disable File and Print services, as well as Client for Microsoft Networks. Be sure to disable NetBIOS over TCP as well. LAN speed/duplex does not appear to be configurable with this driver, but the switch ports have been set to negotiate for 1Gbps speed only, and PortFast is on to reduce port blocking time on initial NIC connect.
  • Once networking is functional, I am able to transfer Qlogic HBA drivers to the server, and install them in the Device Manager control panel.
  • We also transfer the DS4000 Storage Manager for Windows installer to the server, and then perform a client install of the software (this installs the RDAC multi-path I/O driver onto the server).
  • Configure LUNs on the DS4800 storage server, configure LUN masking, and configure Zoning on the Brocade switches. SAN volumes are now available on the server.

Cluster Configuration:

  • Set up Microsoft Distributed Transaction Coordinator as a cluster resource as well. Strictly follow instructions in MS KB301600. Created a 1 GB LUN for this task – drive letter “T”
  • Set up a cluster group for MSSQL server; add physical disk resources for MDF and LDF files (database and transaction logs) (Drives M: and L:)

SQL Setup:

  • Ensure that you are logged on to only one node of the cluster – setup will fail if there are active RDP sessions on both cluster nodes.
  • Create “Servers” and “Tools” directories for CD1 and CD2 contents, respectively.
  • Run SQL Setup from the primary node in the cluster, with source files extracted to our primary file server under the base path \software\sql2005\server.
  • Start with an installation of Workstation Components only… need to run this install on each node separately
  • Run SP1 patch on both nodes to update SQL Setup Support files.
  • Install all remaining SQL components (Database, Analysis services, Reporting, Notification, Integration). Select option to create failover cluster for Database and Analysis services, select drive M: as the destination for data files.
  • Configure service accounts for every service
  • Set server to “mixed” authentication mode for support of some of our apps
  • Keep default collation settings (this uses SQL Collations for the Database service and Windows Collations (Latin_CS_AS) for Analysis Services).
  • NOTE: Reporting Services will have to be configured after SQL install
  • Enabled automatic error reporting but NOT usage statistics.

Next Steps:

  • Review use of clustering with Notification Services
  • Review Reporting Services deployment options:
    • Separate Reporting Services Instance for each application that uses it?
    • Central Reporting Services web cluster? (Would require 2x additional Windows 2003 EE installs!)