Javascript in Data View Web Part XSLT

One of the great frustrations of working with XSLT in Data View Web Parts is the very limited set of functions in the XSLT 1.0 implementation available in SharePoint 2007. However (as in so many things) javascript offers a solution.

This method is the simplest I have found and is based on posts at Sharepointalist, Programmingsharepoint and Sharepointboris.

Inserting the javascript functions

The method here works for inline scripts, but would probably also work with script references.

  1. Locate the root XSLT template
  2. Insert the script within a CDATA tag
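The two steps above can be sketched as follows (the function body is illustrative, not the original script); the CDATA section stops the XSLT parser from trying to interpret characters like < and && inside the script:

```xml
<xsl:template match="/">
  <script type="text/javascript">
    <![CDATA[
      // illustrative helper - any javascript can go here
      function countup(dateText) {
        var days = Math.floor((new Date() - new Date(dateText)) / 86400000);
        document.write(days + " days ago");
      }
    ]]>
  </script>
  <!-- rest of the root template output continues here -->
</xsl:template>
```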

Calling the javascript functions


One method is to use xsl:attribute to build a link and add an onclick attribute
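As a sketch (the showItem function and the columns are illustrative assumptions), the onclick attribute can be assembled piece by piece with xsl:attribute:

```xml
<a>
  <xsl:attribute name="href">#</xsl:attribute>
  <xsl:attribute name="onclick">
    <!-- builds e.g. showItem('42');return false; -->
    <xsl:text>showItem('</xsl:text>
    <xsl:value-of select="@ows_ID"/>
    <xsl:text>');return false;</xsl:text>
  </xsl:attribute>
  <xsl:value-of select="@ows_LinkTitle"/>
</a>
```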

Using parameters and writing results back to the template

This example takes the formatted modified date, passes it through a javascript function, and writes out the result.

The countup() javascript function ends with a call to document.write() to output the result into the rendered page.
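A sketch of what such a function might look like (the date arithmetic and output format are assumptions; only the countup()/document.write() pattern comes from the original). The calculation is kept in a separate pure function:

```javascript
// Whole days between the formatted modified date and "now".
function daysSince(modified, now) {
  // dates in YYYY-MM-DD form parse cleanly with the Date constructor
  var msPerDay = 1000 * 60 * 60 * 24;
  return Math.floor((now - new Date(modified)) / msPerDay);
}

// Hypothetical countup(): writes the result into the page at the
// point in the XSLT output where the script runs.
function countup(modified) {
  document.write(daysSince(modified, new Date()) + " days ago");
}
```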

URI encode the page location for use as the Source element of a link

This would be particularly useful for making DVWPs portable and for working with InfoPath Form Libraries. See URI encode Source attribute in SharePoint 2007 Data View Web Part calling an InfoPath form for a solution that uses these techniques.
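As a sketch (the helper name and paths are invented for illustration), javascript's standard encodeURIComponent function can build the Source parameter from the current page location:

```javascript
// Hypothetical helper: builds a DispForm link whose Source parameter
// sends the user back to sourceUrl when the form is closed.
function buildDispFormUrl(listUrl, id, sourceUrl) {
  return listUrl + "/DispForm.aspx?ID=" + id +
         "&Source=" + encodeURIComponent(sourceUrl);
}
```

In a DVWP the sourceUrl argument would typically be window.location.href, so the link works wherever the web part is placed.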

Adding links in SharePoint Data View Web Parts

I am sure this is well documented elsewhere, but it is something that always flummoxes me when I am working on a Data View in SharePoint Designer 2007. Typically you will want to add a link to the field in one of the columns in the data view so users can view the detail. One way to achieve this is with Web Part Connections that allow you to pass filter values to web parts on this or other pages.  However I was looking for a simple hyperlink that would open the list item view using the ID of the record.

<xsl:value-of select="@ows_LinkTitle"/>

However, simply trying to use the concat function with strings of HTML fails to parse:

<xsl:value-of select="concat('<a href="/research/resportal/events/Lists/Events/DispForm.aspx?ID=', @ows_ID, '">', @ows_LinkTitle, '</a>')" />

There are two solutions to this

  • using the attribute tag: probably the right way to do this
  • creating a variable: almost certainly a lot slower, but more flexible

The Attribute Tag

The facility to add attributes to an <a> element is built into XSLT.

This approach elegantly mixes HTML and XSLT values.
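A sketch of the pattern (the list path is illustrative): the href is constructed inside xsl:attribute, so literal text and field values mix freely without any escaping tricks:

```xml
<a>
  <xsl:attribute name="href">
    <xsl:text>/Lists/Events/DispForm.aspx?ID=</xsl:text>
    <xsl:value-of select="@ows_ID"/>
  </xsl:attribute>
  <xsl:value-of select="@ows_LinkTitle"/>
</a>
```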


Creating a variable

The trick is to declare the HTML strings as variables and then concat these variables:

<td class="ms-vb">
  <xsl:variable name="L1Text">&lt;a href="/Events/DispForm.aspx?ID=</xsl:variable>
  <xsl:variable name="L2Text">"&gt;</xsl:variable>
  <xsl:variable name="L3Text">&lt;/a&gt;</xsl:variable>
  <xsl:value-of select="concat($L1Text, @ows_ID, $L2Text, @ows_LinkTitle, $L3Text)" disable-output-escaping="yes" />
</td>

The only two tricks are

  1. all HTML tags in the strings need to be escaped, i.e. &lt; for < and &gt; for >
  2. set disable-output-escaping="yes"

As the variables are static they could be declared earlier in the XSL so they are not reset on each iteration.

Synchronising through the cloud

[Now that I am using a Nokia N8 running Symbian ^3 Anna, I have updated these reflections in a new post]

I work in a pretty mixed economy when it comes to OSs and platforms

  • Windows 7 at work
  • Ubuntu Lucid (and above) at home and netbook
  • Android 2.2 work mobile
  • Symbian S60V3 personal mobile

I often have content that I want to synchronise across some or all of these platforms, and I want to do it without paying any money.


I am looking for

  • complete and live synchronisation between Windows and Ubuntu
  • selective synchronisation on the mobile devices i.e.
    • all the cloud hosted files are available on demand but not automatically synchronised
    • specified files/folders are synchronised when requested
    • new files/folders can be uploaded from the mobile device

Windows 7 <-> Ubuntu

The simplest solution I have found is dropbox as this has good synchronisation clients for both Windows 7 and Ubuntu.

Sugarsync does not have an Ubuntu/Linux client.

Windows 7 <-> Android

The Android phone is a new addition and I am still trying to find the best mix of apps.  For synchronisation the dropbox app was a real disappointment.

Sugarsync seems to offer a better solution. You can selectively sync specified folders between Windows and Android, which allows me to limit the sync to just those folders/files that are really live at the moment.

Symbian <-> anything

Symbian is the poor relation here. I have been using Nokia Symbian smartphones for several years and have always found a way to get them to do what I want. For navigation I find them better than the Android (so far at least) and the camera is just better quality. Perhaps I will move away from the platform with the next upgrade, but I will take some persuading.

There is an unofficial client for Dropbox that looks as if it will do the trick.  However it seems to be primarily a web interface which allows access rather than syncing.  Comments also raise some security concerns.

UPDATE: Sugarsync have released an official client for Symbian; it says it is available through Ovi but I could not find it. Unfortunately it is not compatible with S60v3. Bit of a pain really.

UPDATE2: The solution was there all along! The Symbian file manager has support for webdav built in.  Combine this with the dropdav service and you have a solution for working with cloud hosted files from Symbian s60v3. See The easiest way to use Dropbox on Symbian smartphones from the Independent Symbian Blog

The alternative seems to be to use the web interface.

            Windows   Ubuntu   Android   Symbian      Web   Mobile web
Dropbox     Y         Y        ?         y (webdav)   ?     d,u
SugarSync   Y         Y        Y         ?            d
Y = official client that meets requirements
y = unofficial client
? = a client but does not really do what I need
d = download
u = upload

Querying SharePoint 2007 Lists from InfoPath using XML

As a number of bloggers have noted, InfoPath does not read SharePoint lists quite as well as you would expect, at least in its 2007 incarnation. The most obvious ways to query a SharePoint list have serious limitations:

  • Creating a data connection using the SharePoint list wizard only allows lookups on the ID column (great for master->detail lookups but not a lot else)
  • Using the SharePoint web service GetListItems simply fails

The approach presented here builds on a Sharepoint Tips And Tricks article which explains these limitations and suggests an alternative approach.

Unlike in the article, in this situation we need to connect to SharePoint to look up a value in a list based on values on the InfoPath form, i.e. using a respondent’s age, gender and body fat percentage the system returns a result ranging from “very lean” to “very fat” from a SharePoint list which holds all the permutations of gender, body fat and age (about 350 records).

By using an XML data source InfoPath is able to filter the SharePoint list data on arbitrary fields rather than just ID, in our case gender, age range and body fat.

Create an XML data connection

The process is the same as described in the article with one small exception.

  • add data connection
  • Choose XML
  • paste in the address in the form http://server/infopath/_vti_bin/owssvr.dll?Cmd=Display&List={listGUID}&XMLDATA=TRUE
    • http://server/infopath is the full path to the web where the list is found
    • {listGUID} is the list GUID (which can be found from the List Setting URL)
  • choose “Access the data from the specified location”
  • Give the data connection a name
  • Check “Automatically retrieve data when form is opened” (important!)
    • this is different from the article

The list data is now available to InfoPath, but with a few provisos.

Using the XML Data Connection

Opening http://server/infopath/_vti_bin/owssvr.dll?Cmd=Display&List={listGUID}&XMLDATA=TRUE in your browser will show you how SharePoint has “re-interpreted” the list data

  • all the column names are prefixed with “ows_” and use the “internal name” (i.e. no spaces)
  • number values are returned to thirteen decimal places
  • calculated columns are included (unlike in a SharePoint-type data connection), but the formatting may be different
    • in my case concatenating gender, age range identifier and body fat percentage returned a value ows_uniq="string;#Female130"
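An illustrative fragment of what the returned XML looks like (the column names and values here are invented to match the example above; the rs:data/z:row structure is what SharePoint actually emits):

```xml
<xml xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema">
  <rs:data>
    <z:row ows_ID="1" ows_Gender="Female"
           ows_BodyFat="30.0000000000000"
           ows_uniq="string;#Female130" />
  </rs:data>
</xml>
```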

Bearing this in mind, values from the XML data source can be used in the same way as any other secondary data source: in the XPath formula editor, choose the column to return from within rs:data, z:row and click on the filter button to construct a filter.


This approach seems to be well suited to my situation where

  • the form is lightly used by a small group of people
  • the SharePoint list data is effectively static
  • the SharePoint list contents are quite small

If this was not the case the option of writing a small webservice proxy might have been preferable.

Calculating the difference in dates on InfoPath forms

Calculating the difference between two dates, as in a person’s age at a particular point in time, seems to be a pretty common request among people developing InfoPath forms for use in SharePoint.  In my case I followed the advice in Alec Pojidaev’s Blog but it did not seem to work for me, giving an age that was too high.

So I have now taken inspiration from Villeroy’s post on the OpenOffice.Org forums, Re: [Solved] Replacing DATEDIF in an Excel equation?, where he presents a generic method of calculating ages in spreadsheets using IF statements.  The logic behind the approach, for years at least, is pretty obvious and can be adapted to InfoPath, even though there is no IF statement as such.

How it works

The solution also takes advantage of several InfoPath features

  • the fixed date format in InfoPath allows you to extract the day, month and year portions using the substring function.
  • rules are applied in order so you can simulate the IF statement logic from the spreadsheet formula
  • conditions on rules can include expressions (at the bottom of the list of fields) again helping to implement the IF logic

The Fields

There are 3 fields in InfoPath: DoB (date of birth), DateOfTest, and Age

Age is a read only field

DoB is a required field

DateOfTest is also required and is the field with the rules attached in this example (not really the best idea, see below)

The Rules

This solution uses three rules to simulate the IF statements in Villeroy’s solution. They are applied in order, but only one ever runs because of the conditions.

Rule 1, Month is less and Age is blank

Conditions

  • Age is blank AND
  • DoB is not blank AND
  • The expression substring(., 6, 2) < substring(../my:DoB, 6, 2)
    • substring(., 6, 2) extracts the Month as 2 digits from DateOfTest
    • substring(../my:DoB, 6, 2) extracts the Month as 2 digits from DoB (your XPath may vary)

Action

  • Set field Age to
  • substring(., 1, 4) - substring(DoB, 1, 4) - 1
    • substring(., 1, 4) extracts the Year as 4 digits from DateOfTest
    • substring(DoB, 1, 4) extracts the Year as 4 digits from DoB
    • -1 because we have not reached our client’s birthday yet

Rule 2, Month is the same, the Day of Month is less and Age is blank

Conditions

  • Age is blank AND
  • DoB is not blank AND
  • The expression substring(., 6, 2) = substring(../my:DoB, 6, 2) AND
    • i.e. both dates have the same month
  • The expression substring(., 9, 2) < substring(../my:DoB, 9, 2)
    • i.e. the day of the month for DateOfTest is less than the day of the month for DoB

Action

  • Set field Age to
  • substring(., 1, 4) - substring(DoB, 1, 4) - 1
    • i.e. the same as in Rule 1

Rule 3, Month is greater and Age is blank (i.e. everything else)

Conditions

  • Age is blank AND
  • DoB is not blank

Action

  • Set field Age to
  • substring(., 1, 4) - substring(DoB, 1, 4)
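The three rules can be sketched in plain javascript (a paraphrase of the logic, not InfoPath code; note that XPath substring is 1-based while javascript’s substr is 0-based, so positions 6 and 9 become 5 and 8):

```javascript
// Dates use the fixed InfoPath format YYYY-MM-DD, so fixed substring
// positions are safe, and zero-padded strings compare correctly as text.
function age(dob, dateOfTest) {
  var years = Number(dateOfTest.substr(0, 4)) - Number(dob.substr(0, 4));
  var monthLess = dateOfTest.substr(5, 2) < dob.substr(5, 2);  // Rule 1
  var sameMonth = dateOfTest.substr(5, 2) === dob.substr(5, 2);
  var dayLess = dateOfTest.substr(8, 2) < dob.substr(8, 2);    // Rule 2
  if (monthLess || (sameMonth && dayLess)) {
    return years - 1; // birthday not yet reached this year
  }
  return years;       // Rule 3: everything else
}
```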

Why we should not just have rules attached to the DateOfTest field

In this example it is just the DateOfTest field that has the rules.  This is because in this example the DateOfTest field comes later in the form and we can expect users to fill it out sequentially.

However, if there was a danger that users might complete the DateOfTest field before completing the DoB field, this particular approach would not work. To overcome this limitation, a similar set of rules should also be attached to the DoB field, to be triggered if the DoB field was filled in second.

Permanent URLs in SharePoint document libraries

We are looking to embed links to documents in SharePoint document libraries into spreadsheets and other documents to make it easier for users to quickly jump to the relevant document. This is pretty trivial at one level: use right-click to copy the URL and paste it into the target application. But what if the document moves?

I am not thinking of documents moving between document libraries, just documents moving from the “live” folder to the “archive” folder. The problem is that the document URL includes the folder name. This persists even if you change the view so that it does not show folders. Looking at the library in SharePoint Designer it is clear that SharePoint “actually” stores documents in folders. In fact it looks as if the document does not have a GUID but is linked to a particular list.


Redirection Library

A “global” solution that would not only accommodate this use case but would also manage the situation when content gets moved to another library would be to implement something like the Zeven Seas Link Conductor, which uses a redirection table to maintain redirection links. While this is a neat solution it has drawbacks in this instance

  • you would need to add an entry for every item even if it never moved
  • any changes have to be manually entered

Never move a document

Rather than using folders, it would be possible to reorganise the library so that a column holds a status entry instead of moving documents from folder to folder. The Encoded Absolute URL link would then remain the same.

In most circumstances this would be the most straightforward solution.

Other options

It would be great to hear if there are other options.

Blogging from eeePC


The software seems to work on the eeePC but it is a bit crude.

There is no support for downloading categories from the blog and no styled preview, but the problems are more fundamental than that.

But the editor is plain text! There is no built-in support for bulleted lists, etc. You can insert links, images and tables, but that is little comfort with such a limited editor. You cannot even get the windows to lay out properly on the tiny eeePC screen.


The firefox plugin ScribeFire works much better and looks like the real solution. The only problem is working offline, but that is a small price to pay.

Spoke too soon. Scribefire also has problems with the small screen and insists on running the main editing box beyond the right edge of the screen. This may be a theme problem though…

I have now updated the eeePC Firefox with the Whitehart theme and Tiny Menu and this seems to have solved the problem.

Offline blogging with LiveWriter

I have been using Windows Live Writer a lot for my blogging at work (a private development log) and it has revolutionised the way I blog. From being something I had to force myself to do it has become something of a habit.

I am wondering if setting up the same thing on this "public" blog will have the same effect.  It was always more of a chore to blog in DotNetNuke than in WordPress so the impact of the technology may be greater for the work blog than on this one.  But the work log has a pressing purpose which this one sadly lacks.

I am also experimenting with offline editors for my Asus eeePC (still running the original Xandros Linux operating system).  Again this should make the practice of blogging easier from a technology point of view.

But then technology has not really been the problem …

Xubuntu 7.10 (Gutsy Gibbon) with vnc

Now I have set up vnc on Xubuntu before, so why should it be so difficult this time? As before my guide was the extensive ubuntu forums thread. The thread has grown since I last looked and a few new gotchas have emerged, although to be honest they are more figments of my imagination than real issues. The following are observations on what I learned (and in some cases “mis-learned”) from the thread.

One of the things that threw me was the references to vnc4server not working properly on 7.10 AMD64. My experience is that, if you get everything else right, the default vnc4server packages work fine.

Another thing that held me back was not being able to test the vnc installation with the local viewer. This made me think that it was not working when (perhaps) it was.

Checking the actual location of the fonts directories is something that carries over from the last experience, although I don’t think the location has changed since the last version.

The issue that had me foxed for the longest was the server_args string in /etc/xinetd.d/Xvnc. While I was experimenting with tightvncserver and launching vncserver from the command line, the option “-query localhost” had got lost from the string. As a result the vncviewer showed the X-windows grey screen, but no login page. A number of people had observed this problem in the thread, but nobody had been stupid enough to cause it by messing up the command to launch the server.
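For reference, an xinetd entry of the kind discussed in the thread might look like this (the geometry, font settings and port are illustrative; the important detail is "-query localhost" in server_args, without which you get the grey screen and no login page):

```
service Xvnc
{
        type        = UNLISTED
        disable     = no
        socket_type = stream
        protocol    = tcp
        wait        = yes
        user        = nobody
        server      = /usr/bin/Xvnc
        server_args = -inetd -query localhost -geometry 1024x768 -depth 16 -once -securitytypes=none
        port        = 5900
}
```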

Along with several people who have commented on the thread, there were a number of times I had to wonder why I was struggling so hard to set up what should be a pretty simple vnc server. Other options are discussed in the thread, and I also have another ubuntu workstation that uses NoMachine NX for remote administration. But the main thing with vnc is that you get the option to power off and restart the machine. As the server runs “headless,” i.e. no screen, this is vital. I could certainly administer it through a terminal interface using SSH, but if I am going to have graphical remote access it might as well let me do everything I need to do.

Xubuntu 7.10 (Gutsy Gibbon) with Software RAID

With an expanding music collection I wanted to avoid the chore of backing up to multiple DVDs by building a RAID server so that at least a hard drive failure would not compromise the collection. This has not been as simple as I had hoped.

Mistake 1: I bought a motherboard specifically because it supported RAID5, but then decided I could not afford 3 disks and ended up using RAID 1. And anyway Linux can use software RAID rather than the Windows driver-based version supported by the motherboard.

I tried a number of approaches before this particular combination worked for me. I am sure there are other more elegant ways of achieving the same end, but hopefully my experience will save somebody else some time.

Starting point

This is a completely new system which has never been formatted. The key components for the installation are

Installation Options

The main trick was to persuade the installer to give the options to set up the RAID array during installation. To get the options

  1. boot from the alternate installation disk
  2. at the main menu hold down F6 (options) until you get the choice of Normal and Expert mode
  3. choose Expert mode
    • it can be worth a “dry run” in normal mode to get used to the principal installation options, but I could not see how to install a RAID system that way
  4. start the installation
  5. the console-based system is pretty tedious and I accepted the defaults for every step except …
  6. when offered the choice to load additional modules choose the MD multi disk option
    • I am writing this from cryptic notes taken during the process so I am not sure of the exact names, but the key one is the multidisk option
    • The configuration I chose does not use LVM but I did select LVM at this stage. I am not sure if it was ever used.
  7. when the partitioner starts choose manual
    1. My configuration was to create a single big partition on both disks for the RAID, and a small swap partition on both disks which is left “un-RAID-ed”. This is not completely fault tolerant, but should be pretty easy to recover
    2. select the disk to partition
    3. create the main partition and set the Use As option to “Physical Volume for RAID” and set to be bootable
    4. create the swap partition and set Use As option to “swap”
  8. (the following is what I noted down, but may be because I did not read the screen properly)
    1. after specifying the partitions on both disks choose the option to write the changes to disk
    2. a warning message was displayed that there was no active partition
    3. choose continue
    4. the partition manager page is displayed again but with a new option at the top to build the Software RAID
  9. Software RAID installation is done through the MD administration module you added to the installation earlier.  The options I chose were
    1. RAID 1
    2. as /dev/md0
    3. 2 drives
    4. no spares
    5. sda1 and sdb1
  10. When complete the MD administration returns you to the partitioner but there is now a new device to partition, a “RAID device.” Partition this as usual
    1. file system ext3
    2. mount at /
    3. (the bootable option is not available)
  11. Write the changes to disk in Partitioner and this time there is no warning
  12. The installation process continues as usual
  13. When the GRUB installation page came up I chose the default option, i.e. install in the master boot record of hd0.
    • This means that the RAID array will probably not boot if /dev/sda fails
    • If /dev/sda does fail the plan is to rewire /dev/sdb as /dev/sda (i.e. switch the connectors) and make it bootable with a rescue disk.
    • The RAID array would have to be rebuilt manually after adding a replacement /dev/sdb
  14. The Xubuntu system now boots as normal with md0 as the active partition

The installation now appears to be operating properly and I have begun setting it up as our local server. Running cat /proc/mdstat suggests that RAID1 is working fine, and df shows a root partition on /dev/md0 which is the right size, but otherwise it is completely transparent.
Background Reading

As I have explained, the procedure above was the result of several abortive attempts to install a RAID filesystem and get it to boot. During these experiments through to my eventual success I used the following pages and posts, which provided help, reassurance and inspiration, even if I was not always able to follow their advice. Thanks to all those who took the trouble to share what worked for them.